Matching Items (1,083)
Description
Among the many challenges facing circuit designers in deep sub-micron technologies, power, performance, area (PPA) and process variations are perhaps the most critical. Since existing strategies for reducing power and boosting the performance of circuit designs have already matured to saturation, it is necessary to explore alternate, unconventional strategies. This investigation focuses on using perceptrons to enhance the PPA of digital circuits, and starts by constructing the perceptron from a combination of complementary metal-oxide-semiconductor (CMOS) and flash technology. The use of flash gives the perceptron a variable delay and reprogrammable functionality, making it robust to process, voltage, and temperature variations. By replacing parts of an application-specific integrated circuit (ASIC) with these perceptrons, improvements of up to 30% in area and 20% in power can be achieved without affecting performance. Furthermore, the ability to vary a perceptron's delay enables circuit designers to fix setup- and hold-time violations post-fabrication, while reprogramming its functionality enables circuit obfuscation. The study extends to field-programmable gate arrays (FPGAs), showing that traditional FPGA architectures can also achieve improved PPA by replacing some Look-Up Tables (LUTs) with perceptrons. Given that replacing parts of traditional digital circuits yields significant PPA improvements, a natural extension was to ask whether circuits built entirely from perceptron compute elements would improve energy efficiency. This was demonstrated by developing perceptron-based compute elements and constructing an architecture from them for Quantized Neural Network acceleration. The resulting circuit delivered up to 50 times greater energy efficiency than a CMOS-based accelerator, without resorting to standard low-power techniques such as voltage scaling and approximate computing.
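To make the threshold-logic idea concrete, here is a small behavioral sketch in Python (my own illustration, not the dissertation's CMOS/flash circuit): a perceptron compares a weighted sum of binary inputs against a threshold, and "reprogramming" the threshold changes which logic function the same structure realizes, loosely analogous to re-tuning the flash devices post-fabrication.

```python
# Behavioral sketch (not the dissertation's circuit): a threshold-logic
# perceptron whose weights/threshold can be "reprogrammed", loosely
# analogous to tuning flash charge to change functionality after fabrication.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    acc = sum(x * w for x, w in zip(inputs, weights))
    return 1 if acc >= threshold else 0

# A 3-input majority gate: fires when at least two inputs are high.
majority = lambda a, b, c: perceptron([a, b, c], weights=[1, 1, 1], threshold=2)
assert majority(1, 1, 0) == 1 and majority(1, 0, 0) == 0

# "Reprogramming" the same structure into a 3-input AND by raising the threshold.
and3 = lambda a, b, c: perceptron([a, b, c], weights=[1, 1, 1], threshold=3)
assert and3(1, 1, 1) == 1 and and3(1, 1, 0) == 0
```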
Contributors: Wagle, Ankit (Author) / Vrudhula, Sarma (Thesis advisor) / Khatri, Sunil (Committee member) / Shrivastava, Aviral (Committee member) / Seo, Jae-Sun (Committee member) / Ren, Fengbo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Machine learning models are increasingly being deployed in real-world applications where their predictions are used to make critical decisions in a variety of domains. The proliferation of such models has led to a burgeoning need to ensure their reliability and safety, given the potential negative consequences of model vulnerabilities. The complexity of machine learning models, along with the extensive data sets they analyze, can result in unpredictable and unintended outcomes. Model vulnerabilities may arise from errors in data input, algorithm design, or model deployment, which can have significant implications for both individuals and society. To prevent such negative outcomes, it is imperative to identify model vulnerabilities early in the development process. This helps guarantee the integrity, dependability, and safety of the models, mitigating potential risks and enabling the full potential of these technologies to be realized. However, enumerating vulnerabilities can be challenging due to the complexity of the real-world environment. Visual analytics, situated at the intersection of human-computer interaction, computer graphics, and artificial intelligence, offers a promising approach for achieving high interpretability of complex black-box models, thus reducing the cost of obtaining insights into potential model vulnerabilities. This research is devoted to designing novel visual analytics methods that support the identification and analysis of model vulnerabilities. Specifically, generalizable visual analytics frameworks are instantiated to explore vulnerabilities in machine learning models concerning security (adversarial attacks and data perturbation) and fairness (algorithmic bias). Finally, a visual analytics approach is proposed that enables domain experts to explain and diagnose, in a human-in-the-loop fashion, the model improvements made to address the identified vulnerabilities. The proposed methods hold the potential to enhance the security and fairness of machine learning models deployed in critical real-world applications.
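As a concrete instance of the security vulnerabilities such frameworks visualize, the following is a minimal FGSM-style adversarial-perturbation sketch in PyTorch (an illustration of the attack class, not the dissertation's visual analytics system; the model and input are hypothetical stand-ins):

```python
# Minimal FGSM-style perturbation sketch: nudge an input along the sign
# of the loss gradient to probe a model's adversarial vulnerability.
import torch
import torch.nn as nn

model = torch.nn.Sequential(nn.Linear(4, 3))   # hypothetical stand-in classifier
x = torch.randn(1, 4, requires_grad=True)      # hypothetical input
y = torch.tensor([2])                          # its true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

epsilon = 0.1                                  # perturbation budget
x_adv = x + epsilon * x.grad.sign()            # adversarial candidate
print(model(x).argmax().item(), model(x_adv).argmax().item())
```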
Contributors: Xie, Tiankai (Author) / Maciejewski, Ross (Thesis advisor) / Liu, Huan (Committee member) / Bryan, Chris (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
One of the challenges in Artificial Intelligence (AI) is to integrate fast, automatic, and intuitive System-1 thinking with slow, deliberate, and logical System-2 thinking. While deep learning approaches excel at perception tasks for System-1, their reasoning capabilities for System-2 are limited. Moreover, deep learning approaches are usually data-hungry, struggle to make use of explicit knowledge, and lack interpretability and justification. This dissertation presents three neuro-symbolic AI approaches that integrate neural networks (NNs) with symbolic AI methods to address these issues. The first approach is NeurASP, which combines NNs with Answer Set Programming (ASP), a logic programming formalism. NeurASP provides an effective way to integrate sub-symbolic and symbolic computation by treating NN outputs as probability distributions over atomic facts in ASP. The explicit knowledge encoded in ASP corrects mistakes in NN outputs and allows for better training with less data. To avoid NeurASP's bottleneck in symbolic computation, this dissertation presents Constraint Loss via Straight-Through Estimators (CL-STE). CL-STE provides a systematic way to compile discrete logical constraints into a loss function over discretized NN outputs, and it scales significantly better than state-of-the-art neuro-symbolic methods. This dissertation also presents a finding from applying CL-STE to Transformers: Transformers can be extended with recurrence to enhance their power for multi-step reasoning, and such a Recurrent Transformer can be applied straightforwardly to visual constraint reasoning problems while successfully addressing the symbol grounding problem. Lastly, this dissertation addresses the limitations of pre-trained Large Language Models (LLMs) on multi-step logical reasoning problems with a dual-process neuro-symbolic reasoning system called LLM+ASP, in which an LLM (e.g., GPT-3) serves as a highly effective few-shot semantic parser that turns natural language sentences into a logical form usable as input to ASP. LLM+ASP achieves state-of-the-art performance on several textual reasoning benchmarks and can handle robot planning tasks that an LLM alone fails to solve.
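A hedged sketch of the CL-STE idea follows (a simplification of mine, not the paper's code): network outputs are discretized with a straight-through estimator so that a penalty on a discrete constraint, here a toy "exactly one output is 1" rule, still yields gradients for training.

```python
# Hedged sketch of the CL-STE idea: binarize network outputs with a
# straight-through estimator, then penalize violations of a discrete
# constraint -- here, "exactly one output is 1".
import torch

class STEBinarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, probs):
        return (probs > 0.5).float()   # hard 0/1 decision in the forward pass
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                # pass the gradient straight through

logits = torch.randn(5, requires_grad=True)
probs = torch.sigmoid(logits)
bits = STEBinarize.apply(probs)

# Exactly-one constraint compiled into a loss over the discretized outputs.
loss = (bits.sum() - 1.0) ** 2
loss.backward()                        # gradients reach `logits` via the STE
print(bits, logits.grad)
```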
Contributors: Yang, Zhun (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Li, Baoxin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Not enough students are earning bachelor's degrees in Computer Science, which is surprising given that computing jobs are growing by the thousands (Zampa, 2016). These jobs have high-paying salaries and are not going to disappear any time soon, which is why the falling number of computer science graduates is alarming. The working hypothesis for why so few college students major in computer science is that most think it is too hard to learn (Wang, 2017). But I believe the real reason is that computer science is not taught as a subject before university, and that is too late for most students: by ages 12 to 13 (about seventh to eighth grade), many have already decided that computer science concepts are "too difficult" for them to learn (Learning, 2022). Introducing computer science education at an earlier age could circumvent this pattern, in which students lose confidence and come to doubt their ability to learn computer science. This can be done by integrating computer science into academic subjects already taught in elementary schools, such as science, math, and language arts, since computer science draws on logic, syntax, and other broadly applicable skills. Thus, I have created an introductory lesson plan for an elementary school class that combines learning to code with robotics, in order to promote computer science principles and counter the stigma that the subject is "too hard" to learn in university.
Contributors: Wong, Erika (Author) / Hedges, Craig (Thesis director) / Fischer, Adelheid (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
As computing evolves and libraries enable developers to create efficient software at a faster rate, the security of modern programs is an area of great concern, because complex software breeds vulnerabilities. Given the criticality of computer systems security, cybersecurity education must keep pace with the rapidly evolving technology industry. An example of growth in cybersecurity education can be seen in pwn.college, a publicly available resource composed of modules that teach computer systems security. In response to the demand for expanded cybersecurity education, the pwn.college developers designed a new set of modules for a course at Arizona State University and offered the same modules for public use. One of these modules, the "babyfile" module, focuses on the exploitation of FILE structures in the C programming language. FILE structures allow for fast and efficient file operations. Unfortunately, FILE structures have severe vulnerabilities that can be exploited to attain elevated privileges for reading data, writing data, and executing instructions. Drawing on research into the applications of FILE structure vulnerabilities, the babyfile module was designed with twenty challenges that teach pwn.college users how to exploit programs by misusing FILE structures. The challenges are sorted in increasing order of difficulty, and their intended solutions exercise all of the vulnerabilities discussed in this paper. In addition to introducing users to exploits against FILE structures, babyfile also showcases a novel attack against the virtual function table, which is located at the end of a FILE structure.
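The control-flow hijack the module builds toward can be previewed with a harmless analogy. In glibc, a FILE's operations dispatch through a table of function pointers (the vtable) at the end of the structure; the toy Python sketch below (an analogy of the mechanism, not exploit code) shows why corrupting a single table entry redirects execution at an otherwise ordinary call site:

```python
# Toy analogy: stream operations dispatched through a table of function
# references, as glibc's FILE does via the vtable at the end of the struct.
# Corrupting one entry is enough to redirect control flow when an ordinary
# operation runs.

def real_write(data):
    print(f"writing {data!r} to the file")

def attacker_function(data):
    print("hijacked: attacker's code runs instead of the write")

stream = {"buffer": b"", "vtable": {"write": real_write}}

stream["vtable"]["write"]("hello")              # normal dispatch
stream["vtable"]["write"] = attacker_function   # simulated table corruption
stream["vtable"]["write"]("hello")              # same call site, hijacked
```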
Contributors: Ratliff, Derek Michael (Author) / Shoshitaishvili, Yan (Thesis advisor) / Wang, Fish (Committee member) / Bao, Tiffany (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Imitation Learning, also known as Learning from Demonstration (LfD), is a field of study dedicated to aiding an agent's learning process by providing access to expert demonstrations. Within LfD, Movement Primitives are a family of algorithms that have been widely studied and implemented in complex robot scenarios. Interaction Primitives, a subset of Movement Primitives, have demonstrated notable success in learning single interactions between humans and robots. However, literature addressing the extension of these algorithms to support multiple variations of an interaction is limited. This thesis presents a physical human-robot interaction algorithm capable of predicting appropriate robot responses in complex interactions that involve a superposition of multiple interactions. The proposed algorithm, Blending Bayesian Interaction Primitives (B-BIP), achieves responsive motions in complex hugging scenarios and can reciprocate and adapt to the motion and timing of a hug. B-BIP generalizes prior work: the original formulation is recovered as the particular case of a single interaction. The performance of B-BIP is evaluated through an extensive user study and empirical experiments. The proposed algorithm yields significantly lower quantitative prediction error and more favorable participant responses concerning accuracy, responsiveness, and timing compared to existing state-of-the-art methods.
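A hedged sketch of the blending idea follows (my own simplification, not B-BIP itself): each primitive is represented as a weighted sum of radial basis functions over an interaction phase, and a time-varying blend weight superposes two such trajectories; all weights below are toy values.

```python
# Hedged sketch of blending movement primitives: each primitive is a
# weighted sum of radial basis functions over phase; a time-varying
# blend weight mixes the two trajectories into one response.
import numpy as np

def primitive(phase, weights, centers, width=0.05):
    """Trajectory value at each phase point from RBF basis weights."""
    basis = np.exp(-((phase[:, None] - centers[None, :]) ** 2) / (2 * width))
    basis /= basis.sum(axis=1, keepdims=True)  # normalized basis activations
    return basis @ weights

phase = np.linspace(0, 1, 100)                 # interaction phase, 0 to 1
centers = np.linspace(0, 1, 10)                # RBF centers
hug_a = primitive(phase, np.sin(centers * np.pi), centers)  # toy weights
hug_b = primitive(phase, np.cos(centers * np.pi), centers)  # toy weights

alpha = phase   # blend weight shifts from primitive A to primitive B over time
blended = (1 - alpha) * hug_a + alpha * hug_b
```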
Contributors: Drolet, Michael (Author) / Ben Amor, Heni (Thesis advisor) / Lan, Shiwei (Thesis advisor) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Social networking platforms have redefined communication, serving as conduits for swift global information dissemination on contemporary topics and trends. This research probes information cascade (IC) dynamics, focusing on viral ICs, in which user-shared information gains rapid, widespread attention. Implications of ICs span advertising, persuasion, opinion-shaping, and crisis response. First, this dissertation aims to unravel the context behind viral content in the digital world, introducing a semi-supervised taxonomy induction framework (STIF). STIF employs state-of-the-art term representation, topical phrase detection, and clustering to organize terms into a two-level topic taxonomy. Social scientists then assess the topic clusters for coherence and completeness. STIF proves effective, significantly reducing human coding effort (by up to 74%) while accurately inducing taxonomies and term-to-topic mappings, owing to the high purity of its topics. Second, to profile the drivers of virality, this study investigates the messaging strategies that influence virality. Three content-based hypotheses are formulated and tested, demonstrating that the incorporation of "negativity bias," "causal arguments," and "threats to personal or societal core values," singularly and jointly, significantly enhances message virality on social media, as quantified by retweet counts. Furthermore, the study highlights the pivotal role of framing narratives in shaping discourse, particularly in adversarial campaigns. An innovative pipeline for automatic framing detection is introduced and tested on a collection of texts on the Russia-Ukraine conflict. Integrating representation learning, overlapping graph clustering, and a unique Topic Actor Graph (TAG) synthesis method, the study achieves remarkable framing-detection accuracy. The developed scoring mechanism maps sentences to framing signatures automatically. This pipeline attains an F1 score of 92% and a 95% weighted accuracy for framing detection on a real-world dataset. In essence, this dissertation focuses on a multidimensional exploration of information cascades, uncovering the context and drivers of content virality and automating framing detection. Through innovative methodologies such as STIF, messaging-strategy analysis, and TAG Frames, the research contributes valuable insights into the mechanics of viral content spread and framing nuances within the digital landscape, enriching fields such as advertising, communication, public discourse, and crisis-response strategies.
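In the spirit of STIF's two-level taxonomy induction, here is a hedged sketch (a simplification of mine, not the dissertation's pipeline; TF-IDF character n-grams stand in for its state-of-the-art term representations, and the term list is hypothetical): terms are embedded, clustered into top-level topics, and each topic is sub-clustered into subtopics.

```python
# Hedged sketch of two-level topic clustering: embed terms, cluster into
# top-level topics, then sub-cluster each topic into subtopics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

terms = ["vaccine", "booster", "mask mandate", "inflation",
         "interest rate", "stock market", "quarterback", "touchdown"]

# Cheap stand-in embeddings; the real system would use learned term vectors.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(terms)

top = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for topic in range(2):
    idx = [i for i, t in enumerate(top) if t == topic]
    if not idx:
        continue
    k = min(2, len(idx))   # guard against tiny top-level clusters
    sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[idx])
    print(f"topic {topic}:", [(terms[i], int(s)) for i, s in zip(idx, sub)])
```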
Contributors: Mousavi, Maryam (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Corman, Steven (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2023