This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
This thesis encompasses a comprehensive research effort dedicated to overcoming the critical bottlenecks that hinder the current generation of neural networks, thereby significantly advancing their reliability and performance. Deep neural networks, with their millions of parameters, suffer from over-parameterization and a lack of constraints, leading to limited generalization capabilities. In other words, the complex architecture and millions of parameters present challenges in finding the right balance between capturing useful patterns and avoiding noise in the data. To address these issues, this thesis explores novel solutions based on knowledge distillation, enabling the learning of robust representations. Leveraging the capabilities of large-scale networks, effective learning strategies are developed. Moreover, the dependency on external networks in the distillation process, which often requires large-scale models, is overcome by proposing a self-distillation strategy. The proposed approach empowers the model to generate high-level knowledge within a single network, pushing the boundaries of knowledge distillation. The effectiveness of the proposed method is not only demonstrated across diverse applications, including image classification, object detection, and semantic segmentation, but also explored in practical considerations such as handling data scarcity and assessing the transferability of the model to other learning tasks. Another major obstacle hindering the development of reliable and robust models lies in their black-box nature, which impedes clear insight into the contributions toward the final predictions and yields uninterpretable feature representations. To address this challenge, this thesis introduces techniques that incorporate simple yet powerful deep constraints rooted in Riemannian geometry.
These constraints confer geometric qualities upon the latent representation, thereby fostering a more interpretable and insightful representation. In addition to its primary focus on general tasks like image classification and activity recognition, this strategy offers significant benefits in real-world applications where data scarcity is prevalent. Moreover, its robustness in feature removal showcases its potential for edge applications. By successfully tackling these challenges, this research contributes to advancing the field of machine learning and provides a foundation for building more reliable and robust systems across various application domains.
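The self-distillation idea described above builds on the standard temperature-softened distillation objective, in which a "student" distribution is pulled toward a softened "teacher" distribution; in self-distillation the teacher signal comes from the same network rather than an external large-scale model. The following is a minimal sketch of that generic objective, with illustrative helper names; the thesis's actual formulation is more elaborate.

```python
# Minimal sketch of a temperature-softened distillation loss.
# Helper names are illustrative, not the thesis's actual API.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    In self-distillation the teacher logits come from the same network
    (e.g. a deeper exit or an earlier training snapshot), so no external
    large-scale teacher model is required.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl  # conventional T^2 scaling of the gradient
```

A higher temperature exposes more of the teacher's "dark knowledge" (the relative probabilities of wrong classes), which is what makes the softened targets richer than one-hot labels.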
Contributors: Choi, Hongjun (Author) / Turaga, Pavan (Thesis advisor) / Jayasuriya, Suren (Committee member) / Li, Wenwen (Committee member) / Fazli, Pooyan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Social media has become a primary means of communication and a prominent source of information about day-to-day happenings in the contemporary world. The rise in popularity of social media platforms in recent decades has empowered people with an unprecedented level of connectivity. Despite the benefits social media offers, it also comes with disadvantages. A significant downside to staying connected via social media is susceptibility to falsified information, or Fake News. Easy access to social media and the lack of truth-verification tools have allowed miscreants on online platforms to spread false propaganda at scale, causing chaos. The spread of misinformation on these platforms ultimately leads to mistrust and social unrest. Consequently, there is a need to counter the spread of misinformation, which could otherwise have a detrimental impact on society. A notable example is the misinformation spread during the COVID-19 pandemic, when coordinated misinformation campaigns misled the public on vaccination and health safety. Advances in Natural Language Processing have given rise to sophisticated language generation models that can produce realistic-looking text. Although the current Fake News generation process is manual, it is only a matter of time before this process is automated at scale, generating Neural Fake News with language generation models such as Bidirectional Encoder Representations from Transformers (BERT) and the third-generation Generative Pre-trained Transformer (GPT-3). Moreover, given that the current state of fact verification is manual, there is an urgent need for reliable automated detection tools to counter Neural Fake News generated at scale. Existing tools demonstrate state-of-the-art performance in detecting Neural Fake News but exhibit black-box behavior.
Incorporating explainability into the Neural Fake News classification task will build trust and acceptance amongst different communities and decision-makers. Therefore, the current study proposes a new set of interpretable discriminatory features. These features capture statistical and stylistic idiosyncrasies, achieving an accuracy of 82% on Neural Fake News classification. Furthermore, this research investigates essential dependency relations contributing to the classification process. Lastly, the study concludes by providing directions for future research in building explainable tools for Neural Fake News detection.
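The "interpretable discriminatory features" the study proposes are statistical and stylistic properties of the text that a human can read directly. As a hypothetical sketch of that idea (the study's actual 82%-accuracy feature set is far richer), a few such features can be computed like this:

```python
# Hypothetical sketch of simple statistical/stylistic text features of the
# kind the study describes; not the study's actual feature set.
import re

def stylistic_features(text):
    """Return a dict of human-readable stylistic features for one document."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / n_words,
        "avg_word_len": sum(len(w) for w in words) / n_words,
        "punct_rate": sum(c in ",;:!?" for c in text) / max(len(text), 1),
    }
```

Because each feature has a plain-language meaning, the weights a downstream classifier assigns to them can be inspected directly, which is the transparency benefit the abstract emphasizes over black-box detectors.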
Contributors: Karumuri, Ravi Teja (Author) / Liu, Huan (Thesis advisor) / Corman, Steven (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Social media platforms have become widely used for open communication, yet their lack of moderation has led to the proliferation of harmful content, including hate speech. Manual monitoring of such vast amounts of user-generated data is impractical, necessitating automated hate speech detection methods. Pre-trained language models possess strong base capabilities: they not only excel at in-distribution language modeling but also show powerful abilities in out-of-distribution language modeling, transfer learning, and few-shot learning. However, these models operate as complex function approximators, mapping input text to a hate speech classification without providing any insight into the reasoning behind their predictions. Hence, existing methods often lack transparency, hindering their effectiveness, particularly in sensitive content moderation contexts. Recent efforts have been made to integrate their capabilities with large language models like ChatGPT and Llama2, which exhibit reasoning capabilities and broad knowledge utilization. This thesis explores leveraging the reasoning abilities of large language models to enhance the interpretability of hate speech detection. A novel framework is proposed that utilizes state-of-the-art Large Language Models (LLMs) to extract interpretable rationales from input text, highlighting key phrases or sentences relevant to hate speech classification. By incorporating these rationale features into a hate speech classifier, the framework inherently provides transparent and interpretable results. This approach combines the language understanding prowess of LLMs with the discriminative power of advanced hate speech classifiers, offering a promising solution to the challenge of interpreting automated hate speech detection models.
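The rationale-then-classify pipeline described above can be sketched as two stages: an LLM first extracts the phrases relevant to a hate-speech decision, and a classifier then consumes those rationales so its output is inspectable. The interfaces below are hypothetical; the thesis uses actual LLMs such as ChatGPT or Llama2 for the extraction step, and here a trivial keyword stub stands in so the sketch runs without API access.

```python
# Sketch of a rationale-then-classify pipeline (hypothetical interfaces;
# the LLM call is replaced by a toy keyword stub so the example is runnable).

def extract_rationales(text, llm=None):
    """Return the phrases most relevant to a hate-speech decision.

    If an LLM callable is supplied, delegate to it; otherwise fall back to a
    toy stand-in lexicon purely for illustration.
    """
    if llm is not None:
        return llm(f"List phrases in this text relevant to hate speech: {text}")
    lexicon = {"hate", "stupid", "vermin"}  # toy stand-in, not a real lexicon
    return [w.strip(".,!?") for w in text.lower().split()
            if w.strip(".,!?") in lexicon]

def classify_with_rationales(text, llm=None):
    """Return (label, rationales); the rationales make the decision inspectable."""
    rationales = extract_rationales(text, llm)
    label = "hateful" if rationales else "benign"
    return label, rationales
```

The design point is that the classifier's evidence (the rationale list) travels with its verdict, so a moderator can audit exactly which spans of the input drove the decision.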
Contributors: Nirmal, Ayushi (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Wei, Hua (Committee member) / Arizona State University (Publisher)
Created: 2024