Matching Items (96)

Item 150046
Description
This thesis describes a synthetic task environment, CyberCog, created for the purposes of (1) understanding and measuring individual and team situation awareness in the context of a cyber security defense task and (2) providing a context for evaluating algorithms, visualizations, and other interventions intended to improve cyber situation awareness. CyberCog provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of a cyber security defense analyst in response to a cyber-attack scenario. CyberCog generates the performance measures and interaction logs needed for measuring individual and team cyber situation awareness. Moreover, the environment provides good experimental control for conducting effective situation awareness studies while retaining realism in the scenario and in the tasks performed.
Contributors: Rajivan, Prashanth (Author) / Femiani, John (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Lindquist, Timothy (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150509
Description
Gathering and managing software requirements, known as Requirements Engineering (RE), is a significant and foundational step in the Software Development Life Cycle (SDLC). Any error or defect introduced during the RE step propagates to later steps of the SDLC, where resolving it is more costly than resolving a defect introduced in other steps. In order to produce better quality software, the requirements have to be free of defects. Verification and Validation (V&V) of requirements is performed to improve their quality by applying the V&V process to the Software Requirement Specification (SRS) document. Focusing V&V of software requirements on a specific domain helps improve quality. A large database of software requirements from software projects in different domains was created. Software requirements from commercial applications are the focus of this project; other domains (embedded, mobile, e-commerce, etc.) can be the focus of future efforts. The V&V is done to inspect the requirements and improve their quality. Inspections are done to detect defects in the requirements, and three approaches for inspecting software requirements are discussed: ad-hoc techniques, checklists, and scenario-based techniques. A more systematic domain-specific technique is presented for performing V&V of requirements.
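
As a toy illustration of the checklist-based inspection approach mentioned above, the sketch below flags requirements that contain ambiguous terms. The checklist contents and requirement texts are illustrative assumptions, not material from the thesis.

```python
# A minimal sketch of checklist-based requirement inspection: scan each
# requirement for terms a defect checklist marks as ambiguous or untestable.
AMBIGUITY_CHECKLIST = ["fast", "user-friendly", "flexible", "as appropriate", "etc."]

requirements = [  # hypothetical SRS entries
    "R1: The system shall respond to search queries within 2 seconds.",
    "R2: The interface shall be user-friendly and fast.",
]

for req in requirements:
    hits = [term for term in AMBIGUITY_CHECKLIST if term in req.lower()]
    if hits:
        print(f"DEFECT {req[:3]} ambiguous terms: {hits}")  # flag for inspection
    else:
        print(f"OK     {req[:3]}")
```
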
Contributors: Chughtai, Rehman (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Millard, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 161629
Description
One persisting problem in Massive Open Online Courses (MOOCs) is student dropout. Predicting student dropout from MOOC courses can identify the factors responsible for such an event, and it can further enable intervention before the event to increase student success in MOOCs. There are different approaches and various features available for predicting student dropout in MOOC courses. In this research, the data considered was derived from the self-paced math course ‘College Algebra and Problem Solving’, offered by Arizona State University (ASU) from 2016 to 2020 on the MOOC platform Open edX. This research aims to predict the dropout of students from a MOOC course given a set of features engineered from a day of student learning. The Machine Learning (ML) model used is Random Forest (RF), and it is evaluated using validation metrics such as accuracy, precision, recall, F1-score, and the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. The average rate of student learning progress was found to have more impact than the other features. The model developed can predict the dropout or continuation of students on any given day in the MOOC course with an accuracy of 87.5%, an AUC of 94.5%, a precision of 88%, a recall of 87.5%, and an F1-score of 87.5%. The contributing features and interactions behind the model's predictions were explained using Shapley values. The features engineered in this research are predictive of student dropout and could be used for similar courses to predict student dropout. This model can also help in making interventions at a critical time to help students succeed in this MOOC course.
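
A minimal sketch of the setup described above, using scikit-learn's Random Forest and the same validation metrics; the CSV file and the per-day feature names are hypothetical placeholders, not the thesis's actual feature schema.

```python
# Train a Random Forest dropout classifier on per-day engineered features and
# report accuracy/precision/recall/F1 plus ROC AUC, as in the study above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("daily_learning_features.csv")  # hypothetical dataset
X = df[["avg_progress_rate", "problems_attempted", "videos_watched", "days_active"]]
y = df["dropped_out"]  # 1 = student dropped out after this day

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, proba))
```
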
Contributors: Dominic Ravichandran, Sheran Dass (Author) / Gary, Kevin (Thesis advisor) / Bansal, Ajay (Committee member) / Cunningham, James (Committee member) / Sannier, Adrian (Committee member) / Arizona State University (Publisher)
Created: 2021
Item 171778
Description
Honeypots are a cyber deception technique used to lure attackers into a trap. They contain fake confidential information to make an attacker believe that their attack has been successful. One of the prerequisites for a honeypot to be effective is that it needs to be undetectable. Deploying sniffing and event logging tools alongside the honeypot also helps in understanding the mindset of the attacker after a successful attack. Is there any data that backs up the claim that honeypots are effective in real-life scenarios? The answer is no. Game-theoretic models have been helpful for approximating attacker and defender actions in cyber security. However, in the past these models have relied on expert-created data. The goal of this research project is to determine the effectiveness of honeypots using real-world data. So how can effective honeypots be deployed? This is where honey-patches come into play. Honey-patches are software patches designed to hinder the attacker's ability to determine whether an attack has been successful or not. When an attacker launches a successful attack on a piece of software, the honey-patch transparently redirects the attacker into a honeypot. The honeypot contains fake information that makes the attacker believe they were successful when in reality they were not. After conducting a series of experiments and analyzing the results, there is a clear indication that honey-patches are not a perfect application security solution, having both pros and cons.
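
A toy sketch of the honey-patch idea described above (not the thesis's implementation): instead of fixing the vulnerability outright, the patch lets the attack appear to succeed while serving decoy data and logging the attacker. All names and data here are hypothetical.

```python
# Honey-patch sketch: a conventional patch would return an error on the
# path-traversal attempt; the honey-patch instead redirects to decoy data.
FAKE_SECRET = "api_key=DECOY-0000"  # honeytoken planted for attackers

def log_attack(path: str) -> None:
    print(f"[honeypot] attack logged: {path!r}")

def serve_file(path: str) -> str:
    return f"contents of {path}"  # normal behavior for benign requests

def handle_request(path: str) -> str:
    if "../" in path:        # the vulnerability a plain patch would simply block
        log_attack(path)     # honey-patch: record attacker behavior...
        return FAKE_SECRET   # ...and let them "succeed" against a decoy
    return serve_file(path)

print(handle_request("docs/readme.txt"))
print(handle_request("../../etc/passwd"))
```
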
Contributors: Chauhan, Purv Rakeshkumar (Author) / Doupe, Adam (Thesis advisor) / Bao, Youzhi (Committee member) / Wang, Ruoyu (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 171603
Description
A significant proportion of medical errors involve crucial medical information, and most stem from misinterpreting non-standardized clinical notes. The Clinical Skills exam offered by the United States Medical Licensing Examination (USMLE) was put in place to certify patient note-taking skills before medical students joined professional practice, offering the first line of defense in protecting patients from medical errors. Nonetheless, the exams were discontinued in 2021 owing to the high costs and resource usage of scoring them. This thesis compares four transformer-based models, namely BERT (Bidirectional Encoder Representations from Transformers) Base Uncased, emilyalsentzer/Bio_ClinicalBERT, RoBERTa (Robustly Optimized BERT Pre-Training Approach), and DeBERTa (Decoding-enhanced BERT with disentangled attention), with the goal of mapping free text in patient notes to clinical concepts present in the exam rubric. The impact of context-specific embeddings on BERT was also studied to determine the need for a clinical BERT in the Clinical Skills exam. After comparing it with the three other transformer models, this thesis proposes the use of DeBERTa as the backbone model for patient note scoring in the USMLE Clinical Skills exam. The disentangled attention and enhanced mask decoder integrated into DeBERTa were credited for its high performance relative to the other models. In addition, the effect of meta pseudo labeling was investigated, which further enhanced DeBERTa's performance.
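
A minimal sketch of patient-note scoring framed as token classification with a DeBERTa backbone, using the Hugging Face transformers library; the label set, checkpoint choice, and example note are illustrative assumptions, and the classification head below is untrained, so fine-tuning on scored notes would be required.

```python
# Map tokens in a patient note to rubric-concept tags with a DeBERTa backbone.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-CONCEPT", "I-CONCEPT"]  # hypothetical rubric-concept tags
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/deberta-base", num_labels=len(labels)
)

note = "17 yo male with recurrent palpitations for 3 months, worse with exertion."
inputs = tokenizer(note, return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)[0]

for token, label_id in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                           predictions):
    print(token, labels[int(label_id)])
```
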
Contributors: Ganesh, Jay (Author) / Bansal, Ajay (Thesis advisor) / Mehlhase, Alexandra (Committee member) / Findler, Michael (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 171448
Description
The adoption of Open Source Software (OSS) by organizations has become a strategic need across a wide variety of software applications and platforms. Open Source has changed the way organizations develop, acquire, use, and commercialize software. Further, OSS projects often incorporate principles and practices similar to those of Agile and Lean software development projects. In contrast to traditional organizations, the environment in which these projects function has an impact on process-related elements like the flow of work and the definition of value. Process metrics are typically employed during Agile Software Engineering projects as a means of providing meaningful feedback. Investigating whether OSS projects and communities can utilize these metrics in a beneficial way is thus an interesting research topic. In that context, this exploratory research investigates whether well-established Agile and Lean software engineering metrics provide useful feedback about OSS projects. This knowledge will assist in educating the Open Source community about the applications of Agile Software Engineering and its variations in Open Source projects. Each of the Open Source projects included in this analysis has a substantial development team that maintains a mature, well-established codebase with process flow information. These OSS projects, hosted on GitHub, are investigated by applying process flow metrics; the methodology used to collect these metrics and the relevant findings are discussed in this thesis. The study also compares the results to distinctive Open Source project characteristics as part of the analysis. In this exploratory research, best-fit versions of published Agile and Lean software process metrics are applied to OSS, and following these explorations, specific questions are further addressed using the data collected. This research's original contribution is to determine whether Agile and Lean process metrics are helpful in OSS, as well as the opportunities and obstacles that may arise when applying Agile and Lean principles to OSS.
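
A minimal sketch of one Lean flow metric (issue lead time) computed from GitHub process data via the public REST API; the repository below is just a placeholder, the thesis applies a broader set of process flow metrics, and unauthenticated API requests are rate-limited.

```python
# Compute median issue lead time (creation to closure) for a GitHub repository.
from datetime import datetime
import statistics
import requests

OWNER, REPO = "octocat", "Hello-World"  # placeholder repository
url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
issues = requests.get(url, params={"state": "closed", "per_page": 100}).json()

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# Lead time: how long a work item took from creation to closure, in days.
lead_times = [
    (parse(i["closed_at"]) - parse(i["created_at"])).days
    for i in issues
    if "pull_request" not in i and i.get("closed_at")
]
print("median lead time (days):", statistics.median(lead_times))
```
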
Contributors: Suresh, Disha (Author) / Gary, Kevin (Thesis advisor) / Bansal, Srividya (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 190944
Description
The rise in popularity of applications and services that charge for access to proprietary trained models has led to increased interest in the robustness of these models and the security of the environments in which inference is conducted. State-of-the-art attacks extract models and generate adversarial examples by inferring relationships between a model's input and output. Popular variants of these attacks have been shown to be deterred by countermeasures that poison predicted class distributions and mask class-boundary gradients. Neural networks are also vulnerable to timing side-channel attacks. This work builds on Subneural, an attack framework that uses floating-point timing side channels to extract neural structures. Novel applications of addition timing side channels are introduced, allowing the signs and arrangements of leaked parameters to be discerned more efficiently. Addition timing is also used to leak network biases, making the framework applicable to a wider range of targets. The enhanced framework is shown to be effective against models protected by prediction-poisoning and gradient-masking adversarial countermeasures, and to be competitive with adaptive black-box adversarial attacks against stateful defenses. Mitigations necessary to protect against floating-point timing side-channel attacks are also presented.
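
A minimal sketch of the kind of floating-point timing effect this class of attack exploits: on many x86 CPUs, arithmetic on subnormal doubles takes measurably longer than on normal doubles, and timing that difference can leak information about operand magnitudes. The magnitudes and the observability of the gap are hardware-dependent assumptions, and this toy measurement is not the Subneural framework itself.

```python
# Compare elementwise multiply timing on normal vs. subnormal doubles.
import time
import numpy as np

normal = np.full(1_000_000, 1e-300)     # normal doubles
subnormal = np.full(1_000_000, 1e-320)  # subnormal doubles (< ~2.2e-308)

for name, arr in [("normal", normal), ("subnormal", subnormal)]:
    start = time.perf_counter()
    for _ in range(50):
        arr * 1.5  # result stays (sub)normal, so the slow path persists
    print(f"{name:9s}: {time.perf_counter() - start:.3f}s")
```
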
Contributors: Vipat, Gaurav (Author) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 190879
Description
Open Information Extraction (OIE) is a subset of Natural Language Processing (NLP) that constitutes the processing of natural language into structured, machine-readable data. This thesis uses data in the Resource Description Framework (RDF) triple format, which comprises a subject, a predicate, and an object. The extraction of RDF triples from natural language is an essential step towards importing data into web ontologies as part of the linked open data cloud on the Semantic Web. There have been a number of related techniques for extracting triples from plain natural language text, including but not limited to ClausIE, OLLIE, Reverb, and DeepEx. This study aims to reduce the dependency on conventional machine learning models, since they require training datasets and are not easily customizable or explainable. By leveraging a context-free grammar (CFG) based model, this thesis aims to address some of these issues while minimizing the trade-offs in performance and accuracy. Furthermore, a deep dive is conducted to analyze the strengths and limitations of the proposed approach.
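
A minimal sketch of grammar-driven triple extraction using a toy CFG with NLTK; the grammar and sentence are illustrative assumptions, and the thesis's grammar and extraction rules are far richer than this example.

```python
# Parse a sentence with a toy CFG and read off a (subject, predicate, object)
# triple from the resulting tree, RDF-style.
import nltk

grammar = nltk.CFG.fromstring("""
  S -> NP VP
  VP -> V NP
  NP -> 'Einstein' | 'physics'
  V -> 'studied'
""")
parser = nltk.ChartParser(grammar)

sentence = "Einstein studied physics".split()
for tree in parser.parse(sentence):
    subj = tree[0].leaves()     # NP under S
    pred = tree[1][0].leaves()  # V under VP
    obj = tree[1][1].leaves()   # NP under VP
    print((" ".join(subj), " ".join(pred), " ".join(obj)))
```
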
Contributors: Singh, Varun (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 189330
Description
This thesis presents a study on the fuzzing of Linux binaries to find occluded bugs. Fuzzing is a widely used technique for identifying software bugs. Despite their effectiveness, state-of-the-art fuzzers suffer from limitations in efficiency and effectiveness: fuzzers based on random mutations are fast but struggle to generate high-quality inputs, while fuzzers based on symbolic execution produce quality inputs but lack execution speed. This thesis proposes FlakJack, a novel hybrid fuzzer that patches the binary on the go to detect occluded bugs guarded by surface bugs. To dynamically overcome the challenge of patching binaries, multiple patching strategies are introduced based on the type of bug detected. The performance of FlakJack was evaluated on ten widely used real-world binaries and one chaff-dataset binary. The results indicate that many recently found bugs were already present in previous versions but were occluded by surface bugs. FlakJack's approach improved bug-finding ability by patching the surface bugs that usually guard occluded bugs, significantly reducing patching cycles. Despite its unbalanced approach compared to other coverage-guided fuzzers, FlakJack is fast, lightweight, and robust. False positives can be filtered out quickly, and the approach is practical for other parts of the target. The thesis shows that the FlakJack approach can significantly improve fuzzing performance without relying on complex strategies.
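
A toy illustration (not FlakJack itself, which operates on binaries) of how a surface bug occludes a deeper one: the same input exposes the second bug only once the first is patched out, which is the patching cycle the fuzzer automates. The target function and byte patterns are hypothetical.

```python
# A surface bug guards an occluded bug: until the surface check is "patched",
# any crashing input stops there and the deeper bug is never reached.
def target(data: bytes, surface_patched: bool = False) -> str:
    if data[:2] == b"\xff\xff" and not surface_patched:
        raise RuntimeError("surface bug: crash on malformed header")
    if data[2:6] == b"BOOM":
        raise RuntimeError("occluded bug: reachable only past the surface bug")
    return "ok"

crashing_input = b"\xff\xff" + b"BOOM" + b"\x00\x00"
for patched in (False, True):
    try:
        target(crashing_input, surface_patched=patched)
    except RuntimeError as exc:
        print(f"surface_patched={patched}: {exc}")
```
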
Contributors: Praveen Menon, Gokulkrishna (Author) / Bao, Tiffany (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 171701
Description
Reverse engineering is a process focused on gaining an understanding of the intricacies of a system. This practice is critical in cybersecurity, as it promotes the finding and patching of vulnerabilities as well as the counteracting of malware. Disassemblers and decompilers have become essential in reverse engineering due to the readability of the information they transcribe from binary files. However, these tools still tend to produce involved and complicated outputs that hinder the acquisition of knowledge during binary analysis. Cognitive Load Theory (CLT) explains that this hindrance is due to the human brain's inability to process superfluous amounts of data. CLT classifies this data into three cognitive load types (intrinsic, extraneous, and germane) that can each help gauge complex procedures. In this thesis, a novel program call graph is presented that accounts for these CLT principles. The goal of this graphical view is to reduce the cognitive load tied to the depiction of binary information and to enhance the overall binary analysis process. This feature was implemented within the binary analysis tool angr and its user interface counterpart, angr-management. Additionally, this thesis examines a user study conducted to quantitatively and qualitatively evaluate the effectiveness of the newly proposed proximity view (PV). The user study includes a binary challenge-solving portion measured by defined metrics and a survey phase to receive direct participant feedback regarding the view. The results from this study show statistically significant evidence that PV aids in challenge solving and improves the overall understanding of binaries; they also indicate that this improvement comes at the cost of time. The survey section of the user study further indicates that users find PV beneficial to the reverse engineering process, but additional information needs to be included in future developments.
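
A minimal sketch, using angr (the tool the proximity view was built into), of recovering the function call graph that a view like PV is rendered from; "/bin/ls" is just a placeholder target, and PV's cognitive-load-aware grouping logic is not shown here.

```python
# Recover functions and call edges with angr's fast CFG analysis, then walk
# the call graph stored in the knowledge base.
import angr

proj = angr.Project("/bin/ls", auto_load_libs=False)
proj.analyses.CFGFast()  # populates proj.kb with functions and call edges

# proj.kb.callgraph is a networkx graph whose nodes are function addresses.
for src, dst in list(proj.kb.callgraph.edges())[:20]:
    caller = proj.kb.functions.function(addr=src)
    callee = proj.kb.functions.function(addr=dst)
    if caller and callee:
        print(f"{caller.name} -> {callee.name}")
```
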
Contributors: Smits, Sean (Author) / Wang, Ruoyu (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2022