Matching Items (83)

Description
This undergraduate thesis explores the efficacy of developing a translator generator in the Prolog programming language using Lexical Functional Grammars. A bidirectional machine translator between English and Hungarian, developed as a proof-of-concept case study, is discussed and assessed. The benefits and drawbacks of this approach as generalized to Machine Translation systems are also discussed, along with possible areas of future work.
Contributors: Lane, Ryan Andrew (Author) / Bansal, Ajay (Thesis director) / Bansal, Srividya (Committee member) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
ASU’s Software Engineering (SER) program adequately prepares students for what happens after they become a developer, but there is no standard for preparing students to secure a job post-graduation in the first place. This project creates and executes a supplemental curriculum to prepare students for the technical interview process. The trial run of the curriculum was received positively by study participants, who experienced an increase in confidence over the duration of the workshop.
Contributors: Schmidt, Julia J (Author) / Roscoe, Rod (Thesis director) / Bansal, Srividya (Committee member) / Software Engineering (Contributor) / Human Systems Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Honeypots are a cyber deception technique used to lure attackers into a trap. They contain fake confidential information to make an attacker believe that their attack has been successful. One of the prerequisites for a honeypot to be effective is that it needs to be undetectable. Deploying sniffing and event logging tools alongside the honeypot also helps in understanding the mindset of the attacker after a successful attack. Is there any data that backs up the claim that honeypots are effective in real-life scenarios? The answer is no. Game-theoretic models have been helpful for approximating attacker and defender actions in cyber security. However, in the past these models have relied on expert-created data. The goal of this research project is to determine the effectiveness of honeypots using real-world data. So, how can effective honeypots be deployed? This is where honey-patches come into play. Honey-patches are software patches designed to hinder the attacker’s ability to determine whether an attack has been successful or not. When an attacker launches a successful attack on a piece of software, the honey-patch transparently redirects the attacker into a honeypot. The honeypot contains fake information which makes the attacker believe they were successful while in reality they were not. After conducting a series of experiments and analyzing the results, there is a clear indication that honey-patches are not a perfect application security solution; they have both pros and cons.
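To make the honey-patch idea concrete, here is a minimal, hedged Python sketch of the concept described above: instead of rejecting a malicious input outright, the honey-patched code lets the "exploit" appear to succeed, but against decoy data, while logging the attempt. The lookup functions, the crafted input, and the decoy store are hypothetical illustrations, not the implementation evaluated in the thesis.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("honeypatch")

REAL_SECRETS = {"api_key": "real-key-000"}    # what a conventional patch would protect
DECOY_SECRETS = {"api_key": "decoy-key-42"}   # fake data served to attackers

def lookup(user_input: str, secrets=REAL_SECRETS) -> str:
    # Simulated vulnerability: a crafted input dumps the whole secret store.
    if user_input == "' OR 1=1 --":
        return str(secrets)
    return secrets.get(user_input, "not found")

def lookup_honeypatched(user_input: str) -> str:
    # A conventional patch would reject the crafted input; the honey-patch instead
    # lets the "exploit" appear to succeed against decoy data and logs the event.
    if user_input == "' OR 1=1 --":
        log.info("exploit attempt observed; redirecting to decoy environment")
        return lookup(user_input, secrets=DECOY_SECRETS)
    return lookup(user_input, secrets=REAL_SECRETS)

if __name__ == "__main__":
    print(lookup_honeypatched("api_key"))       # normal use hits real data
    print(lookup_honeypatched("' OR 1=1 --"))   # attack "succeeds" against decoy data only
```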
Contributors: Chauhan, Purv Rakeshkumar (Author) / Doupe, Adam (Thesis advisor) / Bao, Youzhi (Committee member) / Wang, Ruoyu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Distributed self-assessments and reflections empower learners to take the lead in evaluating their own knowledge gains. Both provide essential elements for practice and self-regulation in learning settings. Nowadays, many sources of practice opportunities are made available to learners, especially in the Computer Science (CS) and programming domain. They may choose to utilize these opportunities to self-assess their learning progress and practice their skills. My objective in this dissertation is to understand to what extent the self-assessment process can impact novice programmers’ learning, and what advanced learning technologies I can provide to enhance learners’ outcomes and progress. In this dissertation, I conducted a series of studies to investigate learning analytics and students’ behaviors while working on self-assessment and reflection opportunities. To enable this objective, I designed a personalized learning platform named QuizIT that provides daily quizzes to support learners in the computer science domain. QuizIT adopts an Open Social Student Model (OSSM) that supports personalized learning and serves as a self-assessment system. It aims to ignite self-regulating behavior and engage students in the self-assessment and reflective procedure. I designed and integrated a personalized practice recommender into the platform to investigate the self-assessment process. I also evaluated self-assessment behavioral trails as a predictor of students’ performance. The statistical indicators suggested that the distributed reflections were associated with the learner’s performance. I proceeded to address whether distributed reflections enable self-regulating behavior and lead to better learning in CS introductory courses. From the students’ interactions with the system, I found distinct behavioral patterns that showed early signs of the learners’ performance trajectory. The utilization of the personalized recommender improved students’ engagement and performance in the self-assessment procedure. When I focused on enhancing the impact of reflections during self-assessment sessions through weekly opportunities, the learners in the CS domain showed better self-regulated learning behavior when utilizing those opportunities. The weekly reflections provided by the learners were able to capture more reflective features than the daily opportunities. Overall, this dissertation demonstrates the effectiveness of learning technologies, including an adaptive recommender and reflections, in supporting novice programming learners and their self-assessment processes.
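As a rough illustration of the kind of personalized practice recommendation described above, the sketch below picks the topic with the lowest observed self-assessment accuracy from a learner’s quiz history. The data layout and the selection rule are assumptions chosen for illustration; they are not QuizIT’s actual recommender model.

```python
from collections import defaultdict

def recommend_topic(attempts):
    """attempts: list of (topic, correct: bool) tuples from past daily quizzes."""
    stats = defaultdict(lambda: [0, 0])           # topic -> [correct, total]
    for topic, correct in attempts:
        stats[topic][0] += int(correct)
        stats[topic][1] += 1
    # Recommend the topic with the lowest observed accuracy (ties broken by fewer attempts).
    return min(stats, key=lambda t: (stats[t][0] / stats[t][1], stats[t][1]))

if __name__ == "__main__":
    history = [("loops", True), ("loops", True), ("recursion", False),
               ("recursion", True), ("arrays", False)]
    print(recommend_topic(history))               # -> "arrays"
```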
Contributors: Alzaid, Mohammed (Author) / Hsiao, Ihan (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / VanLehn, Kurt (Committee member) / Nelson, Brian (Committee member) / Bansal, Srividya (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The adoption of Open Source Software (OSS) by organizations has become a strategic need in a wide variety of software applications and platforms. Open Source has changed the way organizations develop, acquire, use, and commercialize software. Further, OSS projects often incorporate principles and practices similar to those of Agile and Lean software development projects. In contrast to traditional organizations, the environment in which these projects function has an impact on process-related elements like the flow of work and value definition. Process metrics are typically employed during Agile Software Engineering projects as a means of providing meaningful feedback. Investigating these metrics to see whether OSS projects and communities can utilize them in a beneficial way thus becomes an interesting research topic. In that context, this exploratory research investigates whether well-established Agile and Lean software engineering metrics provide useful feedback about OSS projects. This knowledge will assist in educating the Open Source community about the applications of Agile Software Engineering and its variations in Open Source projects. Each of the Open Source projects included in this analysis has a substantial development team that maintains a mature, well-established codebase with process flow information. These OSS projects, listed on GitHub, are investigated by applying process flow metrics. The methodology used to collect these metrics and the relevant findings are discussed in this thesis. This study also compares the results against distinctive Open Source project characteristics as part of the analysis. In this exploratory research, best-fit versions of published Agile and Lean software process metrics are applied to OSS, and following these explorations, specific questions are further addressed using the data collected. This research’s original contribution is to determine whether Agile and Lean process metrics are helpful in OSS, as well as the opportunities and obstacles that may arise when applying Agile and Lean principles to OSS.
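For a sense of what applying one Lean process-flow metric to an OSS project can look like, here is a hedged sketch that estimates issue cycle time from the public GitHub REST API. Pagination, authentication, and error handling are omitted, and the metric definition is a simplification for illustration, not the methodology used in the thesis.

```python
from datetime import datetime
from statistics import median
import requests

def closed_issue_cycle_times(owner: str, repo: str):
    """Cycle time in days for recently closed issues of a public GitHub repository."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    resp = requests.get(url, params={"state": "closed", "per_page": 100})
    resp.raise_for_status()
    times = []
    for issue in resp.json():
        if "pull_request" in issue:   # the issues endpoint also returns PRs; skip them
            continue
        opened = datetime.fromisoformat(issue["created_at"].rstrip("Z"))
        closed = datetime.fromisoformat(issue["closed_at"].rstrip("Z"))
        times.append((closed - opened).total_seconds() / 86400)
    return times

if __name__ == "__main__":
    days = closed_issue_cycle_times("octocat", "Hello-World")   # example public repo
    if days:
        print(f"median cycle time: {median(days):.1f} days over {len(days)} issues")
```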
Contributors: Suresh, Disha (Author) / Gary, Kevin (Thesis advisor) / Bansal, Srividya (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Human civilization within the last two decades has largely transformed into an online one, with many of its associated activities taking place on computers and complex networked systems -- their analog and real-world equivalents having been rendered obsolete. These activities run the gamut from the ordinary and mundane, like ordering food, to the complex and large-scale, such as those involving critical infrastructure or global trade and communications. Unfortunately, the activities of human civilization also include criminal, adversarial, and malicious ones, with the result that these too now have their digital equivalents. Ransomware, malware, and targeted cyberattacks are a fact of life today and are instigated not only by organized criminal gangs, but by adversarial nation-states and organizations as well. Needless to say, such actions result in disastrous and harmful real-world consequences. As the complexity and variety of software has evolved, so too has the ingenuity of attacks that exploit it; for example, modern cyberattacks typically involve sequential exploitation of multiple software vulnerabilities. Compared to a decade ago, modern software stacks on personal computers, laptops, servers, mobile phones, and even Internet of Things (IoT) devices involve a dizzying array of interdependent programs and software libraries, with each of these components presenting attractive attack surfaces for adversarial actors. However, the responses to this still rely on paradigms that can neither react quickly enough nor scale to increasingly dynamic, ever-changing, and complex software environments. Better approaches are therefore needed that can assess system readiness and vulnerabilities, identify potential attack vectors and strategies (including ways to counter them), and proactively detect vulnerabilities in complex software before they can be exploited. In this dissertation, I first present a mathematical model and associated algorithms to identify attacker strategies for sequential cyberattacks based on attacker state, attributes, and publicly available vulnerability information. Second, I extend the model and design algorithms to help identify defensive courses of action against attacker strategies. Finally, I present my work to enhance the ability of coverage-based fuzzers to identify software vulnerabilities by providing visibility into complex, internal program states.
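A highly simplified sketch of the kind of model described above: exploits represented as precondition/postcondition pairs over attacker capabilities, with a breadth-first search for a sequence of exploits that reaches a goal capability. The exploit entries and capability names are invented for illustration and do not reflect the dissertation’s actual model or data.

```python
from collections import deque

EXPLOITS = [
    # (name, preconditions, postconditions) over attacker capabilities
    ("phishing",      frozenset(),                 frozenset({"user_shell"})),
    ("local_privesc", frozenset({"user_shell"}),   frozenset({"root_shell"})),
    ("db_cred_dump",  frozenset({"root_shell"}),   frozenset({"db_access"})),
]

def find_attack_path(start, goal):
    """Return a list of exploit names turning `start` capabilities into `goal`, or None."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, path = queue.popleft()
        if goal <= state:
            return path
        for name, pre, post in EXPLOITS:
            if pre <= state:                  # exploit is applicable in this state
                nxt = state | post
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None

if __name__ == "__main__":
    print(find_attack_path(set(), {"db_access"}))
    # -> ['phishing', 'local_privesc', 'db_cred_dump']
```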
Contributors: Paliath, Vivin Suresh (Author) / Doupe, Adam (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Wang, Ruoyu (Committee member) / Shakarian, Paulo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The rise in popularity of applications and services that charge for access to proprietary trained models has led to increased interest in the robustness of these models and the security of the environments in which inference is conducted. State-of-the-art attacks extract models and generate adversarial examples by inferring relationships between a model’s input and output. Popular variants of these attacks have been shown to be deterred by countermeasures that poison predicted class distributions and mask class boundary gradients. Neural networks are also vulnerable to timing side-channel attacks. This work builds on top of Subneural, an attack framework that uses floating point timing side channels to extract neural structures. Novel applications of addition timing side channels are introduced, allowing the signs and arrangements of leaked parameters to be discerned more efficiently. Addition timing is also used to leak network biases, making the framework applicable to a wider range of targets. The enhanced framework is shown to be effective against models protected by prediction poisoning and gradient masking adversarial countermeasures and to be competitive with adaptive black box adversarial attacks against stateful defenses. Mitigations necessary to protect against floating-point timing side-channel attacks are also presented.
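The floating-point timing effect underlying this line of work can be illustrated in a few lines: on many CPUs, arithmetic on subnormal (denormal) operands is measurably slower than on normal ones, and that difference can leak information about operand values such as network parameters. The sketch below only demonstrates the timing gap; it is not the Subneural framework or the attacks developed in the thesis, and the size of the measured difference depends on the hardware and interpreter overhead.

```python
import timeit

NORMAL = 1e-300        # a normal double
SUBNORMAL = 1e-320     # a subnormal double (below the normal range of ~2.2e-308)

def multiply_many(x, n=200_000):
    acc = 0.0
    for _ in range(n):
        acc += x * 3.0   # the multiply whose latency we are timing
    return acc

if __name__ == "__main__":
    t_normal = timeit.timeit(lambda: multiply_many(NORMAL), number=5)
    t_subnormal = timeit.timeit(lambda: multiply_many(SUBNORMAL), number=5)
    print(f"normal operands:    {t_normal:.3f}s")
    print(f"subnormal operands: {t_subnormal:.3f}s  (often slower -> a timing side channel)")
```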
Contributors: Vipat, Gaurav (Author) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Open Information Extraction (OIE) is a subset of Natural Language Processing (NLP) that constitutes the processing of natural language into structured and machine-readable data. This thesis uses data in Resource Description Framework (RDF) triple format, which comprises a subject, predicate, and object. The extraction of RDF triples from natural language is an essential step towards importing data into web ontologies as part of the linked open data cloud on the Semantic Web. There have been a number of related techniques for the extraction of triples from plain natural language text, including but not limited to ClausIE, OLLIE, Reverb, and DeepEx. This proposed study aims to reduce the dependency on conventional machine learning models, since they require training datasets and are not easily customizable or explainable. By leveraging a context-free grammar (CFG) based model, this thesis aims to address some of these issues while minimizing the trade-offs in performance and accuracy. Furthermore, a deep dive is conducted to analyze the strengths and limitations of the proposed approach.
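As a toy illustration of what "natural language to RDF triple" means in practice, the sketch below extracts a (subject, predicate, object) tuple from a simple sentence using a single pattern that merely stands in for the thesis’s grammar-based model, and serializes it as an N-Triples line under a hypothetical base IRI.

```python
import re

# A single, very restrictive "subject verb object." rule; real systems need a full grammar.
PATTERN = re.compile(r"^(?P<subj>[A-Z]\w+)\s+(?P<pred>\w+)\s+(?P<obj>\w+)\.?$")

def extract_triple(sentence: str):
    m = PATTERN.match(sentence.strip())
    if not m:
        return None
    return (m.group("subj"), m.group("pred"), m.group("obj"))

def to_ntriples(triple, base="http://example.org/"):
    s, p, o = triple
    return f"<{base}{s}> <{base}{p}> <{base}{o}> ."

if __name__ == "__main__":
    t = extract_triple("Alice knows Bob.")
    print(t)               # ('Alice', 'knows', 'Bob')
    print(to_ntriples(t))  # <http://example.org/Alice> <http://example.org/knows> <http://example.org/Bob> .
```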
Contributors: Singh, Varun (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This thesis presents a study on the fuzzing of Linux binaries to find occluded bugs. Fuzzing is a widely used technique for identifying software bugs. Despite their usefulness, state-of-the-art fuzzers suffer from limitations in efficiency and effectiveness. Fuzzers based on random mutations are fast but struggle to generate high-quality inputs. In contrast, fuzzers based on symbolic execution produce quality inputs but lack execution speed. This thesis proposes FlakJack, a novel hybrid fuzzer that patches the binary on the go to detect occluded bugs guarded by surface bugs. To dynamically overcome the challenge of patching binaries, multiple patching strategies are introduced based on the type of bug detected. The performance of FlakJack was evaluated on ten widely used real-world binaries and one chaff dataset binary. The results indicate that many bugs found recently were already present in previous versions but were occluded by surface bugs. FlakJack’s approach improved the bug-finding ability by patching surface bugs that usually guard occluded bugs, significantly reducing patching cycles. Despite its unbalanced approach compared to other coverage-guided fuzzers, FlakJack is fast, lightweight, and robust. False positives can be filtered out quickly, and the approach is practical for other parts of the target. This thesis shows that the FlakJack approach can significantly improve fuzzing performance without relying on complex strategies.
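A source-level analogue of the idea above, purely for illustration: a "surface" check aborts execution early and hides (occludes) a deeper bug, and patching the surface check out lets a fuzzing loop reach the occluded bug. FlakJack does this on binaries; the toy target, the patch flag, and the tiny corpus below are invented assumptions, not the thesis’s implementation.

```python
def target(data: bytes, patch_surface_bug: bool = False):
    if not patch_surface_bug:
        # Surface bug: long inputs trip this check first, masking deeper code paths.
        assert len(data) < 8, "surface bug: length check failed"
    if data.startswith(b"MAGIC") and data[5:6] == b"!":
        # Occluded bug: only reachable once the surface bug no longer aborts execution.
        raise RuntimeError("occluded bug: out-of-bounds write simulated here")

def fuzz(inputs, patched):
    for data in inputs:
        try:
            target(data, patch_surface_bug=patched)
        except AssertionError as e:
            print(f"{data!r}: hit surface bug ({e})")
        except RuntimeError as e:
            print(f"{data!r}: hit OCCLUDED bug ({e})")

if __name__ == "__main__":
    corpus = [b"AAAAAAAAAA", b"MAGIC!xxxx"]
    fuzz(corpus, patched=False)   # every long input stops at the surface bug
    fuzz(corpus, patched=True)    # with the surface bug patched, the deeper bug appears
```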
Contributors: Praveen Menon, Gokulkrishna (Author) / Bao, Tiffany (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Reverse engineering is a process focused on gaining an understanding of the intricacies of a system. This practice is critical in cybersecurity as it enables the finding and patching of vulnerabilities as well as the counteracting of malware. Disassemblers and decompilers have become essential when reverse engineering due to the readability of the information they transcribe from binary files. However, these tools still tend to produce involved and complicated outputs that hinder the acquisition of knowledge during binary analysis. Cognitive Load Theory (CLT) explains that this hindrance is due to the human brain’s inability to process superfluous amounts of data. CLT classifies this data into three cognitive load types (intrinsic, extraneous, and germane), each of which can help gauge complex procedures. In this research, a novel program call graph is presented that accounts for these CLT principles. The goal of this graphical view is to reduce the cognitive load tied to the depiction of binary information and to enhance the overall binary analysis process. This feature was implemented within the binary analysis tool angr and its user interface counterpart, angr-management. Additionally, this thesis examines a user study conducted to quantitatively and qualitatively evaluate the effectiveness of the newly proposed proximity view (PV). The user study includes a binary challenge-solving portion measured by defined metrics and a survey phase to gather direct participant feedback regarding the view. The results from this study show statistically significant evidence that PV aids in challenge solving and improves the overall understanding of binaries. The results also signify that this improvement comes at the cost of time. The survey section of the user study further indicates that users find PV beneficial to the reverse engineering process, but additional information needs to be included in future developments.
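As a small illustration of the intuition behind a proximity-style view, the sketch below collapses single-caller leaf functions into their caller so the displayed call graph has fewer nodes to hold in working memory. The example call graph and the grouping rule are invented for illustration; the actual feature operates on angr’s recovered call graph inside angr-management and uses its own logic.

```python
from collections import defaultdict

CALL_GRAPH = {                                   # caller -> callees (toy example)
    "main": ["parse_args", "run"],
    "run": ["load", "transform", "save"],
    "parse_args": [], "load": [], "transform": [], "save": [],
}

def proximity_view(graph):
    callers = defaultdict(list)
    for caller, callees in graph.items():
        for callee in callees:
            callers[callee].append(caller)
    # A callee is absorbed into its caller when it is a leaf with exactly one caller.
    absorbed_into = {c: cs[0] for c, cs in callers.items()
                     if not graph.get(c) and len(cs) == 1}
    collapsed = {}
    for caller, callees in graph.items():
        if caller in absorbed_into:
            continue                              # drawn inside its caller instead
        absorbed = [c for c in callees if absorbed_into.get(c) == caller]
        kept = [c for c in callees if c not in absorbed]
        label = caller + (f" (+{', '.join(absorbed)})" if absorbed else "")
        collapsed[label] = kept
    return collapsed

if __name__ == "__main__":
    for node, callees in proximity_view(CALL_GRAPH).items():
        print(node, "->", callees)                # 6 functions shown as 2 grouped nodes
```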
Contributors: Smits, Sean (Author) / Wang, Ruoyu (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2022