Description
Internet browsers are today capable of warning internet users of a potential phishing attack. Browsers identify these websites by referring to blacklists of reported phishing websites maintained by trusted organizations such as Google and PhishTank. On identifying a Uniform Resource Locator (URL) requested by a user as a reported phishing URL, browsers like Mozilla Firefox and Google Chrome display an 'active' warning message in an attempt to stop the user from making the potentially dangerous decision to visit the website and share confidential information such as usernames and passwords, credit card details, and Social Security numbers.

However, these warnings are not always successful at safeguarding users from a phishing attack. On several occasions, users ignore these warnings and 'click through' them, eventually landing at the potentially dangerous website and giving away confidential information. Failure to understand the warning, failure to differentiate between types of browser warnings, and diminishing trust in browser warnings due to repeated encounters are some of the reasons users ignore these warnings. It is important to address these factors in order to improve users' reactions to these warnings.

In this thesis, I propose a novel design to improve the effectiveness and reliability of phishing warning messages. This design uses the name of the target website that a fake website is mimicking to display a simple, easy-to-understand, interactive warning message, with the primary objective of keeping the user away from a potential spoof website.
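As background for the mechanism described in the first paragraph, the blacklist lookup that triggers such a warning might be sketched as follows. This is a minimal illustration only: the list contents, the look-alike domains, and the warning text are our assumptions, not part of the thesis or of any real browser's code.

```go
package main

import (
	"fmt"
	"net/url"
)

// blacklist is an illustrative, hard-coded set of reported phishing
// hosts; real browsers sync such lists from services like Google
// Safe Browsing or PhishTank.
var blacklist = map[string]bool{
	"examp1e-bank.com": true, // hypothetical look-alike domain
	"paypa1-login.net": true, // hypothetical look-alike domain
}

// checkURL reports whether the requested URL's host appears on the
// blacklist, in which case a browser would interpose a warning page.
func checkURL(raw string) (bool, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return false, err
	}
	return blacklist[u.Hostname()], nil
}

func main() {
	blocked, err := checkURL("https://examp1e-bank.com/login")
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	if blocked {
		// A browser would render an 'active' warning here instead
		// of navigating to the requested page.
		fmt.Println("Warning: reported phishing site")
	}
}
```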
Contributors: Sharma, Satyabrata (Author) / Bazzi, Rida (Thesis advisor) / Walker, Erin (Committee member) / Gaffar, Ashraf (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB in size, but with a limited memory buffer of 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old schema to the new one. It shows that the overhead introduced is minimal for medium-size applications and that the update can be achieved with no more than one minute of downtime.
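The batched copy step described above might look like the following sketch. The old table, the two new tables of a vertical split, and the batch size are illustrative assumptions; the thesis's update client is generated automatically from the schema mapping rather than hand-written like this.

```go
package migrate

import "database/sql"

// A driver such as github.com/go-sql-driver/mysql would be imported
// for its side effects in a real program.

// customer mirrors one row of the hypothetical old schema.
type customer struct {
	id         int64
	name, addr string
}

// copyBatch copies up to batchSize rows with id > lastID from the old
// table into two new tables (a vertical split) in one transaction, so
// that regular client transactions can interleave between batches.
func copyBatch(db *sql.DB, lastID int64, batchSize int) (int64, int, error) {
	tx, err := db.Begin()
	if err != nil {
		return lastID, 0, err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	rows, err := tx.Query(
		`SELECT id, name, addr FROM old_customers
		 WHERE id > ? ORDER BY id LIMIT ?`, lastID, batchSize)
	if err != nil {
		return lastID, 0, err
	}
	var batch []customer
	for rows.Next() {
		var c customer
		if err := rows.Scan(&c.id, &c.name, &c.addr); err != nil {
			rows.Close()
			return lastID, 0, err
		}
		batch = append(batch, c)
	}
	rows.Close()
	if err := rows.Err(); err != nil {
		return lastID, 0, err
	}

	// Vertical split: each old row feeds two new tables.
	for _, c := range batch {
		if _, err := tx.Exec(`INSERT INTO cust_names (id, name) VALUES (?, ?)`, c.id, c.name); err != nil {
			return lastID, 0, err
		}
		if _, err := tx.Exec(`INSERT INTO cust_addrs (id, addr) VALUES (?, ?)`, c.id, c.addr); err != nil {
			return lastID, 0, err
		}
		lastID = c.id
	}
	return lastID, len(batch), tx.Commit()
}
```

Keeping each batch in its own short transaction is what lets availability stay high: the copy holds locks only briefly, and the caller can tune batchSize against the live transaction rate, as the thesis's overhead study does.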
Contributors: Tyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Broadcast Encryption is the task of cryptographically securing communication in a broadcast environment so that only a dynamically specified subset of subscribers, called the privileged subset, may decrypt the communication. In practical applications, it is desirable for a Broadcast Encryption Scheme (BES) to demonstrate resilience against attacks by colluding, unprivileged subscribers. Minimal Perfect Hash Families (PHFs) have been shown to provide a basis for the construction of memory-efficient t-resilient Key Pre-distribution Schemes (KPSs) from multiple instances of 1-resilient KPSs. Using this technique, the task of constructing a large t-resilient BES is reduced to finding a near-minimal PHF of appropriate parameters. While combinatorial and probabilistic constructions exist for minimal PHFs with certain parameters, the complexity of constructing them in general is currently unknown.

This thesis introduces a new type of hash family, called a Scattering Hash Family (ScHF), which is designed to allow for the scalable and ingredient-independent design of memory-efficient BESs for large parameters, specifically resilience and total number of subscribers. A general BES construction using ScHFs is shown, which constructs t-resilient KPSs from other KPSs of any resilience w ≤ t. In addition to demonstrating how ScHFs can be used to produce BESs, this thesis explores several ScHF construction techniques. The initial technique demonstrates a probabilistic, non-constructive proof of existence for ScHFs. This construction is then derandomized into a direct, polynomial-time construction of near-minimal ScHFs using the method of conditional expectations. As an alternative approach to direct construction, representing ScHFs as a k-restriction problem allows for the indirect construction of ScHFs via randomized post-optimization. Using the methods defined, ScHFs are constructed and the parameters' effects on solution size are analyzed. For large strengths, constructive techniques lose significant performance, and as such, asymptotic analysis is performed using the non-constructive existential results.

This work concludes with an analysis of the benefits and disadvantages of BESs based on the constructed ScHFs. Due to the novel nature of ScHFs, the results of this analysis are used as the foundation for an empirical comparison between ScHF-based and PHF-based BESs. The primary bases of comparison are construction efficiency, key material requirements, and message transmission overhead.
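For orientation (our gloss, not drawn from the abstract): a perfect hash family of strength t over a k-point domain is a set of functions such that every t-subset of the domain is mapped injectively by at least one function. ScHFs are novel to this thesis, so we do not attempt to sketch them; the following minimal brute-force checker illustrates only the standard PHF property that the thesis builds on.

```go
package main

import "fmt"

// separates reports whether the hash function f, given as a table of
// values indexed by domain point, is injective on the subset.
func separates(f, subset []int) bool {
	seen := make(map[int]bool)
	for _, x := range subset {
		if seen[f[x]] {
			return false
		}
		seen[f[x]] = true
	}
	return true
}

// isPHF reports whether every t-subset of the domain {0, ..., k-1} is
// separated by at least one function in the family; this is the
// defining property of a perfect hash family of strength t.
func isPHF(family [][]int, k, t int) bool {
	var subset []int
	var check func(start int) bool
	check = func(start int) bool {
		if len(subset) == t {
			for _, f := range family {
				if separates(f, subset) {
					return true
				}
			}
			return false // this t-subset defeats every function
		}
		for x := start; x < k; x++ {
			subset = append(subset, x)
			good := check(x + 1)
			subset = subset[:len(subset)-1]
			if !good {
				return false
			}
		}
		return true
	}
	return check(0)
}

func main() {
	// A tiny example: two functions from 4 points to 2 values form a
	// PHF of strength 2, since every pair of points is separated.
	family := [][]int{
		{0, 0, 1, 1},
		{0, 1, 0, 1},
	}
	fmt.Println(isPHF(family, 4, 2)) // true
}
```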
Contributors: O'Brien, Devon James (Author) / Colbourn, Charles J (Thesis advisor) / Bazzi, Rida (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A load balancer is an essential part of many network systems. A load balancer divides and redistributes incoming network traffic across different back-end servers, improving reliability and performance. Existing load balancing solutions can be classified into two categories: hardware-based and software-based. Hardware-based load balancing systems are hard to manage and force network administrators to scale up (replacing hardware with more powerful but expensive equipment) when the system cannot handle the growing traffic. Software-based solutions have a limitation when dealing with a single large TCP flow. In recent years, with the fast development of virtualization technology, the trend of network function virtualization (NFV) is being adopted. Instead of using proprietary hardware, an NFV network infrastructure uses virtual machines to implement network functions such as load balancers and firewalls. In this thesis, a new load balancing system is designed and evaluated. The system is high-performance and flexible, and it can fully utilize the bandwidth between the load balancer and the back-end servers, unlike traditional load balancers such as HAProxy. The experimental results show that this NFV load balancer can achieve n times the performance of HAProxy, where n is the number of back-end servers. Also, an extract, transform, and load (ETL) application was implemented to demonstrate that this load balancer can shorten data load time. The experiment shows that when loading a large data set (18.3 GB), our load balancer needs 28% less time than a traditional load balancer.
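For flavor, the software side of the idea can be sketched as a minimal round-robin TCP proxy in Go. The listen port and back-end addresses are placeholders, and this sketch deliberately omits the NFV mechanisms that give the thesis's system its n-fold bandwidth advantage over proxies of exactly this shape.

```go
package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

// Placeholder back-end addresses; a real deployment would discover
// these dynamically.
var backends = []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}

var next uint64

// pick selects a back end in round-robin order.
func pick() string {
	n := atomic.AddUint64(&next, 1)
	return backends[n%uint64(len(backends))]
}

// handle proxies one client connection to one back end.
func handle(client net.Conn) {
	defer client.Close()
	server, err := net.Dial("tcp", pick())
	if err != nil {
		log.Println("dial back end:", err)
		return
	}
	defer server.Close()
	// Shuttle bytes in both directions until either side closes.
	go io.Copy(server, client)
	io.Copy(client, server)
}

func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println("accept:", err)
			continue
		}
		go handle(conn)
	}
}
```

Note the bottleneck this design shares with HAProxy: every byte flows through the proxy itself, so its uplink caps aggregate throughput; sidestepping that is precisely what the thesis's NFV architecture is for.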
Contributors: Wu, Jinxuan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Bazzi, Rida (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Graph theory is a critical component of computer science and software engineering, with algorithms for graph traversal and comprehension powering solutions to many of the largest problems in both industry and research. Engineers and researchers often have an accurate view of their target graph; however, they struggle to implement a correct and efficient search over that graph.

To facilitate rapid, correct, efficient, and intuitive development of graph-based solutions, we propose a new programming language construct: the search statement. Given a supra-root node, a procedure which determines the children of a given parent node, and optional definitions of the fail-fast acceptance or rejection of a solution, the search statement can conduct a search over any graph or network. Structurally, this statement is modeled after the common switch statement and is put into a largely imperative/procedural context to allow for immediate and intuitive development by most programmers. The Go programming language has been used as a foundation and proof of concept for the search statement. A Go compiler is provided which implements this construct.
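Since the construct is new syntax, a rough sense of its semantics can be conveyed by desugaring it into an ordinary generic function in today's Go (1.18+). This is our own illustration under the abstract's description, not the thesis's actual syntax or compiler.

```go
package main

import "fmt"

// search walks the graph reachable from root in breadth-first order.
// children generates a node's successors; reject prunes a branch
// (fail fast) and accept stops the search with a solution, mirroring
// the supra-root, children procedure, and optional accept/reject
// clauses the search statement takes.
func search[N comparable](root N, children func(N) []N,
	reject func(N) bool, accept func(N) bool) (N, bool) {
	visited := map[N]bool{root: true}
	queue := []N{root}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if reject(n) {
			continue
		}
		if accept(n) {
			return n, true
		}
		for _, c := range children(n) {
			if !visited[c] {
				visited[c] = true
				queue = append(queue, c)
			}
		}
	}
	var zero N
	return zero, false
}

func main() {
	// Find a multiple of 7 greater than 50 in the graph where each
	// node n has children n+3 and n+5, starting from the root 1.
	children := func(n int) []int { return []int{n + 3, n + 5} }
	reject := func(n int) bool { return n > 200 } // fail-fast bound
	accept := func(n int) bool { return n > 50 && n%7 == 0 }
	if sol, ok := search(1, children, reject, accept); ok {
		fmt.Println("found:", sol)
	}
}
```

The appeal of making this a statement rather than a library function is the same as with switch: the compiler sees the whole search structure and can check and optimize it as a unit.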
Contributors: Henderson, Christopher (Author) / Bansal, Ajay (Thesis advisor) / Lindquist, Timothy (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Thanks to the large data resources generated by online educational applications, Educational Data Mining (EDM) has improved learning outcomes in several ways: student visualization, recommendations for students, student modeling, student grouping, and more. Many programming-assignment systems have features such as automated submission and test-case checking to verify correctness, but few studies have compared different statistical techniques with the latest frameworks or interpreted the resulting models in a unified approach.

In this thesis, several data mining algorithms have been applied to analyze students' code-submission data from a real classroom study. The goal of this work is to explore and predict students' performance. Multiple machine learning models were evaluated for accuracy, and the models were interpreted using the Shapley Additive Explanation (SHAP).

Cross-validation shows that the Gradient Boosting Decision Tree has the best precision, 85.93%, with an average of 82.90%. Features such as Component Grade, Due Date, and Submission Times have a higher impact than others. The baseline model achieved lower precision due to its lack of non-linear fitting.
Contributors: Tian, Wenbo (Author) / Hsiao, Ihan (Thesis advisor) / Bazzi, Rida (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties (such as color, density, and even shape) based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
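As a rough illustration of the kind of primitive such models use (our own encoding, not the paper's formal model): a particle occupies one grid node when contracted and two adjacent nodes, a head and a tail, when expanded. The grid coordinates and move rules below are illustrative assumptions.

```go
package main

import "fmt"

// Point is a node of the grid the particles move on; integer
// coordinates suffice for this sketch.
type Point struct{ X, Y int }

// Particle occupies one node when contracted, or two adjacent nodes
// (head and tail) when expanded.
type Particle struct {
	Head, Tail Point
	Expanded   bool
}

// Expand moves the particle's head into the adjacent node offset by d
// while the tail stays put, provided the particle is contracted and
// the target node is unoccupied.
func (p *Particle) Expand(d Point, occupied map[Point]bool) bool {
	if p.Expanded {
		return false
	}
	target := Point{p.Head.X + d.X, p.Head.Y + d.Y}
	if occupied[target] {
		return false
	}
	p.Tail, p.Head, p.Expanded = p.Head, target, true
	occupied[target] = true
	return true
}

// Contract pulls the tail up to the head, freeing the tail's node.
func (p *Particle) Contract(occupied map[Point]bool) {
	if p.Expanded {
		delete(occupied, p.Tail)
		p.Tail, p.Expanded = p.Head, false
	}
}

func main() {
	occupied := map[Point]bool{{0, 0}: true}
	p := Particle{Head: Point{0, 0}, Tail: Point{0, 0}}
	if p.Expand(Point{1, 0}, occupied) {
		fmt.Printf("expanded: head %v, tail %v\n", p.Head, p.Tail)
	}
	p.Contract(occupied)
	fmt.Printf("contracted at %v\n", p.Head)
}
```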
Contributors: Laff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
As computers become a more embedded aspect of daily life, the importance of communicating ideas in computing and technology to the general public has become increasingly apparent. One such growing technology is electronic voting. The feasibility of explaining electronic voting protocols was directly investigated through the creation of a presentation based on journal articles and papers identified by the investigator. Analogies and visual aids were used extensively to explain various cryptographic concepts. The presentation was then given to a classroom of ASU freshmen, followed by a feedback survey. A self-evaluation of the presentation methods is conducted, and a procedure for explaining subjects in computer science is proposed based on the researcher's personal process.
Contributors: Reniewicki, Peter Josef (Author) / Bazzi, Rida (Thesis director) / Childress, Nancy (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Implementing a distributed algorithm is more complicated than implementing a non-distributed algorithm, because distributed systems involve the coordination of different processes, each of which has only a partial view of the global system state. The only way to share information in a distributed system is by message passing. Tasks that are straightforward in a non-distributed system, such as deciding on the value of a global system state, can be quite complicated to achieve in a distributed system [1]. On top of the difficulties caused by the distributed nature of the computations, distributed systems typically need to operate normally even if some of the nodes in the system are faulty, which further adds to the uncertainty that processes have about the global state.

Many factors make the implementation of distributed algorithms difficult. Design patterns [2] are useful in simplifying the development of general algorithms. A design pattern describes a high-level solution to a common, abstract problem that many systems may face. Common structural, creational, and behavioral problems are identified and elegantly solved by design patterns. By identifying the features that an algorithm uses, and framing each feature as one of the common problems that a specific design pattern solves, designing a robust implementation of an algorithm becomes more manageable. In this way, design patterns can aid the implementation of algorithms. Unfortunately, design patterns are typically not discussed when developing distributed algorithms. Because correctly developing a distributed algorithm is difficult, many papers (e.g., [1], [3], [4]) focus on verifying the correctness of the developed algorithm. More practical papers ([5], [6]) establish both that their algorithms are correct and that they are efficient enough to be practical. However, papers on distributed algorithms usually make little mention of design patterns.

The goal of this work was to gain experience implementing distributed systems, including learning the application of design patterns and of related technical topics. This was achieved by implementing a currently unpublished algorithm tentatively called Bakery Consensus. Bakery Consensus is a replicated state-machine protocol that can tolerate servers with Byzantine faults but assumes non-faulty clients. The algorithm also establishes non-skipping timestamps for each operation completed by the replicated state machine. The design of the structure, communication, and creation of the different system parts depended heavily upon the book Design Patterns [2]. After implementing the system, the success of the implementation of its various parts was evaluated by their ability to satisfy the SOLID principles [7] and to establish low coupling and high cohesion [8].

The rest of this paper is organized as follows. We begin by providing background information about distributed algorithms, including replicated state-machine protocols and the Bakery Consensus algorithm. Section 3 gives background on several design patterns and software engineering principles that were used in the development process. Section 4 discusses the well-designed parts of the system that used design patterns, and how those design patterns were chosen. Section 5 discusses well-designed system parts that relied upon other technical topics. Section 6 discusses system parts that need redesign. The conclusion summarizes what was accomplished by the implementation process and the lessons learned about design patterns for distributed algorithms.
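As an illustration of the kind of pattern such a system can apply (our sketch, not the thesis's code): a Strategy-style dispatcher routes protocol messages to interchangeable handlers, keeping the server's receive loop decoupled from protocol details, which serves exactly the low-coupling, high-cohesion goals named above. The message kinds and fields are hypothetical.

```go
package main

import "fmt"

// Message is what servers in a replicated state machine exchange;
// the fields here are illustrative, not Bakery Consensus's actual
// wire format.
type Message struct {
	Kind    string
	Payload string
}

// Handler is a Strategy in Design Patterns terms: each message kind
// gets its own interchangeable handling behavior.
type Handler interface {
	Handle(m Message)
}

type prepareHandler struct{}

func (prepareHandler) Handle(m Message) { fmt.Println("prepare:", m.Payload) }

type commitHandler struct{}

func (commitHandler) Handle(m Message) { fmt.Println("commit:", m.Payload) }

// Dispatcher routes messages to registered strategies; unknown kinds
// are ignored, since a Byzantine sender may craft arbitrary messages.
type Dispatcher struct {
	handlers map[string]Handler
}

func (d *Dispatcher) Register(kind string, h Handler) {
	if d.handlers == nil {
		d.handlers = make(map[string]Handler)
	}
	d.handlers[kind] = h
}

func (d *Dispatcher) Dispatch(m Message) {
	if h, ok := d.handlers[m.Kind]; ok {
		h.Handle(m)
	}
}

func main() {
	var d Dispatcher
	d.Register("prepare", prepareHandler{})
	d.Register("commit", commitHandler{})
	d.Dispatch(Message{Kind: "prepare", Payload: "op #1"})
	d.Dispatch(Message{Kind: "commit", Payload: "op #1"})
}
```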
Contributors: Stoutenburg, Tristan Kaleb (Author) / Bazzi, Rida (Thesis director) / Richa, Andrea (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
\English is a programming language: a method of allowing programmers to write instructions such that a computer may understand and execute them in the form of a program. Though many programming languages exist, this particular language is designed for ease of development and heavy optimizability in ways that no other programming language is. Building on the principles of assembly-level efficiency, referential integrity, and higher-order functionality, this language is able to produce extremely efficient code; meanwhile, programmatically defined English-based reusable syntax and a strong, static type system make \English easier to read and write than many existing programming languages. Its generalization of all language structures and components to operators leaves the language syntax open to project-specific syntactical structuring, making it more easily applicable in more cases. The thesis project requirements came in three parts: a compiler to translate \English code into NASM Assembly to produce a final program; a standard library to define many of the basic operations of the language, including the creation of lists; and a C translation library that would use \English properties to compile C code with the \English compiler. Though designed and partially coded, the compiler remains incomplete. The standard library, the C translation library, and the design of the language were completed. Additional tools regarding the language's design and implementation were also created, including a Gedit syntax-highlighting configuration file; tutorial-style documentation describing the basic usage of the language; and more. Though the thesis project itself is complete, the \English project will continue in order to produce a new language capable of the abilities made possible by this design.
Contributors: Davey, Connor (Author) / Gupta, Sandeep (Thesis director) / Bazzi, Rida (Committee member) / Calliss, Debra (Committee member) / Barrett, The Honors College (Contributor)
Created: 2016-05