Matching Items (1,087)
Description
Machine learning (ML) and deep neural networks (DNNs) have achieved great success in a variety of application domains. However, despite significant effort to make these networks robust, they remain vulnerable to adversarial attacks, in which input that is perceptually indistinguishable from natural data is erroneously classified with high prediction confidence. Work on defending against adversarial examples can be broadly classified as correcting or detecting, which aim, respectively, at negating the effects of the attack so that the input is correctly classified, or at detecting and rejecting the input as adversarial. In this work, a new approach for detecting adversarial examples is proposed. The approach takes advantage of the robustness of natural images to noise: as noise is added to a natural image, the prediction probability of its true class drops, but the drop is not sudden or precipitous. The same does not seem to hold for adversarial examples. In other words, the stress response profile of natural images seems to differ from that of adversarial examples, so adversarial examples can be detected by their stress response profile. An evaluation of this approach for detecting adversarial examples is performed on the MNIST, CIFAR-10, and ImageNet datasets. Experimental data shows that the approach is effective at detecting some adversarial examples on small-scale images with simple content, with little sacrifice in benign accuracy.
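The stress-response idea lends itself to a compact check: perturb the input with increasing amounts of noise, record how the predicted class's confidence decays, and flag inputs whose confidence collapses abruptly. The following is a minimal sketch in PyTorch, not the thesis's exact procedure; the noise levels, the collapse threshold, and the `model` classifier interface are all assumptions for illustration.

```python
import torch

def stress_profile(model, x, sigmas=(0.02, 0.05, 0.1, 0.2, 0.4), n_samples=16):
    """Mean predicted-class probability of input x at increasing noise levels."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(x.unsqueeze(0)), dim=1)
        cls = base.argmax(dim=1)                  # predicted class at zero noise
        profile = [base[0, cls].item()]
        for sigma in sigmas:
            noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
            probs = torch.softmax(model(noisy), dim=1)[:, cls]
            profile.append(probs.mean().item())
    return profile

def looks_adversarial(profile, collapse_ratio=0.5):
    # Natural images tend to degrade gradually; flag any sudden drop
    # between consecutive noise levels as suspicious.
    return any(b < collapse_ratio * a for a, b in zip(profile, profile[1:]))
```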
Contributors: Sun, Lin (Author) / Bazzi, Rida (Thesis advisor) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Social networking sites like Twitter have provided people a platform to connect with each other, to discuss and share information and news, or to entertain themselves. As the number of users continues to grow, there has been explosive growth in the data generated by these users. Such a vast data source has provided researchers a way to study and monitor public health.

Accurately analyzing tweets is a difficult task, mainly because of their short length and their inventive spellings and creative language expressions. Instead of focusing on the topic level, identifying tweets that contain personal health experience mentions would be more helpful to researchers, governments, and other organizations. Another important limitation of current systems for social media health applications is the use of a disease-specific model and dataset to study a particular disease. Identifying adverse drug reactions is an important part of the drug development process. Detecting and extracting adverse drug mentions in tweets can supplement the list of adverse drug reactions that results from drug trials and can help improve the drugs.

This thesis aims to address these two challenges and proposes three systems: a generalizable system to identify personal health experience mentions across different disease domains, a system for automatic classification of adverse-effect mentions in tweets, and a system to extract adverse drug mentions from tweets. The proposed systems use transfer learning from language models to achieve notable scores on the Social Media Mining for Health Applications (SMM4H) 2019 (Weissenbacher et al. 2019) shared tasks.
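As a hedged illustration of transfer learning from a pretrained language model for this kind of tweet classification, the sketch below fine-tunes a BERT-style encoder with the Hugging Face transformers library. The model name, toy data, label scheme, and training settings are assumptions for illustration, not the thesis's configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in for the SMM4H data: 1 = personal health experience mention.
train_ds = Dataset.from_dict({
    "text": ["just got my flu shot, arm is so sore",
             "flu season is coming, get vaccinated folks"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def encode(batch):
    # Tweets are short; 64 tokens leaves comfortable headroom.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_ds = train_ds.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="smm4h-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```

The same fine-tuning recipe transfers across the three tasks by swapping the dataset and label set, which is what makes the language-model approach generalizable across disease domains.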
Contributors: Gondane, Shubham Bhagwan (Author) / Baral, Chitta (Thesis advisor) / Anwar, Saadat (Committee member) / Devarakonda, Murthy (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
TolTEC is a three-color millimeter-wavelength camera currently being developed for the Large Millimeter Telescope (LMT) in Mexico. Synthesizing data from previous astronomy cameras as well as knowledge of atmospheric physics, I have developed a simulation of TolTEC's data collection on the LMT. The simulation was built from smaller sub-projects that informed its development with an understanding of the detector array, the time streams for astronomical mapping, and the science behind Lumped Element Kinetic Inductance Detectors (LEKIDs). Additionally, key aspects of software development processes were integrated into the scientific development process to streamline collaboration across multiple universities and to plan for integration on the servers at LMT. This work benefits the data reduction pipeline team by enabling them to develop their software efficiently and test it on simulated data.
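A detector time-stream simulation of this kind typically combines an astronomical signal with slow, correlated atmospheric drift and white detector noise. The sketch below is a generic numpy illustration, not TolTEC's actual simulation; the sampling rate, amplitudes, and 1/f atmosphere model are assumptions.

```python
import numpy as np

def simulate_timestream(duration_s=60.0, fsamp_hz=488.0,
                        white_rms=1e-4, atm_amp=1e-3, seed=0):
    """Toy detector time stream: source signal + 1/f atmosphere + white noise."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fsamp_hz)
    t = np.arange(n) / fsamp_hz

    # A point source crossed mid-scan, modeled as a Gaussian pulse.
    signal = 5e-4 * np.exp(-0.5 * ((t - duration_s / 2) / 0.3) ** 2)

    # Atmospheric emission drifts slowly: shape white noise to ~1/f in Fourier space.
    freqs = np.fft.rfftfreq(n, d=1.0 / fsamp_hz)
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum[1:] /= freqs[1:]
    spectrum[0] = 0.0                         # no DC component
    atmosphere = np.fft.irfft(spectrum, n=n)
    atmosphere *= atm_amp / np.abs(atmosphere).max()

    white = rng.normal(scale=white_rms, size=n)
    return t, signal + atmosphere + white
```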
Contributors: Horton, Paul (Author) / Mauskopf, Philip (Thesis advisor) / Bansal, Ajay (Thesis advisor) / Sandy, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Mobile health (mHealth) applications (apps) hold tremendous potential for addressing chronic health conditions. Smartphones are now the most popular form of computing, and the ubiquitous "always with us, always on" nature of mobile technology makes them amenable to interventions aimed at managing chronic disease. Several challenges exist, however, such as the difficulty of determining mHealth effects, given the rapidly changing nature of the technology and the strain it places on existing methods of evaluation, and the difficulty of ensuring that end users use the technology consistently enough to achieve the desired effects. The latter challenge is one of adherence, defined as the extent to which a patient carries out the activities defined in a clinical protocol (i.e., an intervention plan). Higher levels of adherence should lead to greater effects of the intervention: the greater the fidelity to the protocol, the more benefit one should receive from it. mHealth has limitations in both areas: the ability to have patients sustainably adhere to a protocol, and the ability to drive intervention effect sizes. My research considers personalized interventions, a new approach of study in the mHealth community, as a potential remedy to these limitations. Specifically, in the context of a pediatric preventative anxiety protocol, I introduce algorithms that drive greater levels of adherence and greater effect sizes by incorporating per-patient (personalized) information. These algorithms have been implemented within an existing mHealth app for middle school students that has been successfully deployed in a school in the Phoenix, Arizona, metropolitan area. The number of users is small (n=3), so a case-by-case analysis of app usage is presented. In addition, simulated user behaviors based on models of adherence and effect sizes over time are presented to demonstrate the potential impact of personalized deployments on a larger scale.
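As a hedged sketch of the kind of simulation described, the snippet below models adherence as a per-day probability that decays over time, with personalization slowing the decay. The decay rates and the effect of personalization are illustrative assumptions, not the thesis's models or parameters.

```python
import random

def simulate_adherence(days=56, p0=0.9, decay=0.02,
                       personalized=False, seed=42):
    """Simulate daily protocol adherence; personalization slows the decay."""
    rng = random.Random(seed)
    rate = decay * (0.5 if personalized else 1.0)
    completed = []
    for day in range(days):
        p = max(p0 - rate * day, 0.05)      # adherence probability today
        completed.append(rng.random() < p)
    return sum(completed) / days             # fraction of days adhered

baseline = simulate_adherence()
tailored = simulate_adherence(personalized=True)
print(f"adherence: baseline={baseline:.2f}, personalized={tailored:.2f}")
```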
Contributors: Singal, Vishakha (Author) / Gary, Kevin (Thesis advisor) / Pina, Armando (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Computer science education is an increasingly vital area of study with various challenges that raise the difficulty level for new students, resulting in higher attrition rates. As part of an effort to resolve this issue, a new visual programming language environment was developed for this research: the Visual IoT and Robotics Programming Language Environment (VIPLE). VIPLE is based on computational thinking and flowcharts, which reduces the need to memorize the detailed syntax of text-based programming languages. VIPLE has been used at Arizona State University (ASU) across multiple years and sections of FSE100, as well as in universities worldwide. Another major issue with teaching large programming classes is the potential lack of qualified teaching assistants to grade student programs and offer insight into them at a level beyond output analysis.

In this dissertation, I propose a novel framework for semantic autograding, which analyzes student programs at a semantic level to give students additional, systematic help. A general autograder is not practical for general-purpose programming languages because of their semantic flexibility, but a practical autograder is possible in VIPLE because of its simplified syntax and restricted semantics. The design of this autograder is based on the concept of theorem provers. To achieve this goal, I employ a modified version of Pi-Calculus to represent VIPLE programs and Hoare Logic to formalize program requirements. By building on the inference rules of Pi-Calculus and Hoare Logic, I construct a theorem prover that can perform automated semantic analysis. Furthermore, building on this theorem prover enables me to develop a self-learning algorithm that can learn the conditions for a program's correctness from a given solution program.
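To make the Hoare Logic component concrete, consider the assignment rule: {P} x := e {Q} is valid when P implies Q with e substituted for x (the weakest precondition). The sketch below checks that implication with the off-the-shelf Z3 SMT solver purely for illustration; the dissertation builds its own prover on Pi-Calculus and Hoare Logic inference rules rather than using Z3.

```python
from z3 import And, Int, Not, Solver, substitute, unsat

x, y = Int("x"), Int("y")

def verify_assignment(pre, var, expr, post):
    """Hoare assignment rule: {pre} var := expr {post} holds iff
    pre implies post[var := expr], the weakest precondition."""
    wp = substitute(post, (var, expr))
    solver = Solver()
    solver.add(And(pre, Not(wp)))   # search for a counterexample to pre -> wp
    return solver.check() == unsat  # no counterexample means the triple is valid

# Example: {x > 2} y := x + 1 {y > 3} verifies; weakening the
# precondition to x > 1 makes it fail.
print(verify_assignment(x > 2, y, x + 1, y > 3))   # True
print(verify_assignment(x > 1, y, x + 1, y > 3))   # False
```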
Contributors: De Luca, Gennaro (Author) / Chen, Yinong (Thesis advisor) / Liu, Huan (Thesis advisor) / Hsiao, Sharon (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Blockchain technology enables peer-to-peer transactions by eliminating the need for a centralized entity to govern consensus. Rather than relying on a centralized database, the data is distributed across multiple computers, which enables crash fault tolerance and, through a distributed consensus algorithm, makes the system difficult to tamper with.

In this research, the potential of blockchain technology to manage energy transactions is examined. The energy production landscape is being reshaped by distributed energy resources (DERs): photovoltaic panels, electric vehicles, smart appliances, and battery storage. Distributed energy sources such as microgrids, household solar installations, community solar installations, and plug-in hybrid vehicles enable energy consumers to act as providers of energy themselves, that is, as 'prosumers' of energy.

Blockchain technology facilitates managing transactions between the prosumers involved by tokenizing energy into assets traded through 'smart contracts'. Better utilization of grid assets lowers costs and presents the opportunity to buy energy at a reasonable price while staying connected to the utility company. The technology acts as a backbone for two models of a transactional energy marketplace: a 'Real-Time Energy Marketplace' and 'Energy Futures'. In the first model, prosumers are given the choice to bid a price for energy within a stipulated period of time, while the utility company acts as the operating entity. In the second model, the marketplace is more liberal: the utility company is not involved as an operator. It facilitates infrastructure and manages accounts for all users but does not endorse or govern transactions related to energy bidding. These smart contracts are not time-bound and can be suspended by the utility during periods of network instability.
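As a hedged sketch of the real-time model, the snippet below matches prosumer bids against offers at each settlement interval as a simple double auction in Python. The data shapes, clearing rule, and account names are illustrative assumptions; in a deployed system this clearing logic would live inside the smart contract itself.

```python
from dataclasses import dataclass

@dataclass
class Order:
    account: str
    kwh: float
    price: float   # bid (buy) or ask (sell) price per kWh

def clear_interval(bids, asks):
    """Match highest bids with lowest asks; trade at the midpoint price."""
    bids = sorted(bids, key=lambda o: -o.price)
    asks = sorted(asks, key=lambda o: o.price)
    trades = []
    while bids and asks and bids[0].price >= asks[0].price:
        buy, sell = bids[0], asks[0]
        qty = min(buy.kwh, sell.kwh)
        trades.append((buy.account, sell.account, qty,
                       (buy.price + sell.price) / 2))
        buy.kwh -= qty
        sell.kwh -= qty
        if buy.kwh == 0:
            bids.pop(0)
        if sell.kwh == 0:
            asks.pop(0)
    return trades

trades = clear_interval(
    bids=[Order("homeA", 5, 0.12), Order("homeB", 3, 0.10)],
    asks=[Order("solarC", 6, 0.09)])
print(trades)   # homeA buys 5 kWh at 0.105, homeB buys 1 kWh at 0.095
```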
Contributors: Sadaye, Raj Anil (Author) / Candan, Kasim S (Thesis advisor) / Boscovic, Dragan (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
With the steady advancement of neural network research, new applications are continuously emerging. As a tool for test time reduction, neural networks provide a reliable method of identifying and applying correlations in datasets to speed data processing. By leveraging the power of a deep neural net, it is possible to record the motion of an accelerometer in response to an electrical stimulus and correlate the response with a trim code to reduce the total test time for such sensors. This reduction can be achieved by replacing traditional trimming methods such as physical shaking or mathematical models with a neural net that is able to process raw sensor data collected with the help of a microcontroller. With enough data, the neural net can process the raw responses in real time to predict the correct trim codes without requiring any additional information. Though not yet a complete replacement, the method shows promise given more extensive datasets and industry-level testing and has the potential to disrupt the current state of testing.
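A hedged sketch of the core idea: train a small network to map a raw accelerometer response waveform directly to its trim code, treated here as a regression problem in PyTorch. The waveform length, architecture, and synthetic training data are illustrative assumptions, not the thesis's setup.

```python
import torch
import torch.nn as nn

# Toy stand-in: 256-sample stimulus responses with synthetic "trim codes".
responses = torch.randn(512, 256)
trim_codes = responses[:, :8].sum(dim=1, keepdim=True)  # fake ground truth

model = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),            # predicted trim code
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(responses), trim_codes)
    loss.backward()
    opt.step()

# At test time, one forward pass on the raw response replaces a
# physical trimming step such as shaking the sensor.
predicted = model(responses[:1]).round()
```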
Contributors: Debeurre, Nicholas (Author) / Ozev, Sule (Thesis advisor) / Vrudhula, Sarma (Thesis advisor) / Kniffin, Margaret (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Social media bot detection has been a signature challenge in recent years in online social networks. Many scholars agree that the bot detection problem has become an "arms race" between malicious actors, who seek to create bots to influence opinion on these networks, and the social media platforms that seek to remove these accounts. Despite this acknowledged issue, bots remain present on social media networks, so it has become necessary to monitor different bots over time to identify changes in their activities or domain. Since monitoring individual accounts is not feasible (the bots may be suspended or deleted), bots should be observed in smaller groups, as types, based on their characteristics. Yet most existing research on social media bot detection focuses on labeling bot accounts by only distinguishing them from human accounts, and may ignore differences between individual bot accounts. Considering bot types may be the best path for researchers and social media companies alike, as it is in both of their interests to study these types separately. However, up to this point, bot categorization has only been theorized or done manually. The goal of this research is therefore to automate the process of grouping bots by their respective types. To accomplish this goal, the author experimentally demonstrates that unsupervised machine learning can categorize bots into types based on the proposed typology: an aggregated dataset is created, the accounts within it are determined to be bots, and an existing typology for bots is applied. The ability to differentiate between types of bots automatically will allow social media experts to analyze bot activity from a new perspective, on a more granular level. This way, researchers can identify patterns in a given bot type's behaviors over time and determine whether certain detection methods are more viable for that type.
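A hedged sketch of the unsupervised step: represent each confirmed bot account as a feature vector and cluster with k-means, then inspect clusters as candidate types. The feature set and the choice of k below are illustrative assumptions, not the thesis's typology.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one confirmed bot: e.g. [tweets/day, retweet ratio, URL ratio,
# mean inter-tweet gap (s), follower/following ratio].
rng = np.random.default_rng(0)
features = rng.random((300, 5))          # stand-in for real account features

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Each cluster is a candidate bot type; inspect its centroid to label it
# (e.g. amplifiers retweet heavily, spammers have a high URL ratio).
for label, center in enumerate(kmeans.cluster_centers_):
    print(label, np.round(center, 2))
```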
Contributors: Davis, Matthew William (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Morstatter, Fred (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Social media has become a primary platform for real-time information sharing among users. News on social media spreads faster than through traditional outlets, and millions of users turn to this platform to receive the latest updates on major events, especially disasters. Social media bridges the gap between the people who are affected by disasters, volunteers who offer contributions, and first responders. On the other hand, social media is fertile ground for malicious users who purposefully disturb the relief processes facilitated on social media. These malicious users take advantage of social bots to overrun social media posts with fake images, rumors, and false information. This process causes distress and prevents actionable information from reaching the affected people. Social bots are automated accounts controlled by a malicious user, and they have become prevalent on social media in recent years.

In spite of existing efforts toward understanding and removing bots on social media, current bot detection algorithms have at least two drawbacks: first, general-purpose bot detection methods are designed to be conservative and do not label a user as a bot unless the algorithm is highly confident; second, they overlook the effect of users who are manipulated by bots and (unintentionally) spread their content. This study is threefold. First, I design a machine learning model that uses the content and context of social media posts to detect actionable ones among them; it specifically focuses on tweets in which people ask for help after major disasters. Second, I focus on bots, which can facilitate the spread of malicious content during disasters. I propose two methods for detecting bots on social media with a focus on the recall of the detection. Third, I study the characteristics of users who spread the content of malicious actors. These features have the potential to improve methods that detect malicious content such as fake news.
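A hedged sketch of what optimizing detection for recall can look like in practice: rather than using a classifier's default 0.5 cutoff, lower the decision threshold until a target recall is reached on held-out data. The classifier, synthetic features, and target recall below are illustrative assumptions, not the dissertation's methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Stand-in data: rows are accounts, label 1 = bot.
rng = np.random.default_rng(1)
X = rng.random((2000, 10))
y = (X[:, 0] + 0.3 * rng.random(2000) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Pick the highest threshold that still achieves the target recall.
precision, recall, thresholds = precision_recall_curve(y_te, scores)
target = 0.95
ok = recall[:-1] >= target            # recall has one more entry than thresholds
threshold = thresholds[ok].max() if ok.any() else 0.0
flagged = scores >= threshold
print(f"threshold={threshold:.2f}, flagged={flagged.sum()} accounts")
```

Lowering the threshold trades precision for recall, which matches the stated goal: during a disaster, missing a bot is costlier than flagging a few extra accounts for review.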
Contributors: Hossein Nazer, Tahora (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Maciejewski, Ross (Committee member) / Akoglu, Leman (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Live streaming has risen to significant popularity in the recent past, largely as a feature of existing social networks like Facebook, Instagram, and Snapchat. However, at least one social network is devoted entirely to live streaming, and specifically to the live streaming of video games: Twitch. This social network is unique for a number of reasons, not least its hyper-focus on live content, and this uniqueness poses challenges for social media researchers.

Despite this uniqueness, almost no scientific work has been performed on this public social network, so it is unclear which user interaction features present on other social networks exist on Twitch. Investigating the interactions between users and identifying which, if any, of the common user behaviors on social networks exist on Twitch is an important step in understanding how Twitch fits into the social media ecosystem. For example, some users have large followings on Twitch and amass large numbers of viewers, but do those users exert influence over the behavior of other users the way that popular users on Twitter do?

This task, however, is not trivial. The same hyper-focus on live content that makes Twitch unique in the social network space invalidates many traditional approaches to social network analysis, so new algorithms and techniques must be developed to tap this data source. In this thesis, a novel algorithm for finding games whose releases have made a significant impact on the network is described, as well as a novel algorithm for detecting and identifying influential players of games. In addition, the Twitch network is described in detail, along with the data that was collected to power the two algorithms.
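As a hedged sketch of the first kind of algorithm, the snippet below flags a game release as impactful when post-release streaming activity jumps by several standard deviations over the pre-release baseline. The window sizes, z-score test, and toy series are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def release_impact(daily_streams, release_day, window=14, z_thresh=3.0):
    """Compare mean activity after a release to the pre-release baseline."""
    before = np.asarray(daily_streams[release_day - window:release_day])
    after = np.asarray(daily_streams[release_day:release_day + window])
    baseline, spread = before.mean(), before.std(ddof=1)
    z = (after.mean() - baseline) / max(spread, 1e-9)
    return z, z >= z_thresh

# Toy series: steady streaming activity, then a jump at day 30.
series = [100 + i % 5 for i in range(30)] + [180 + i % 7 for i in range(30)]
z, significant = release_impact(series, release_day=30)
print(f"z={z:.1f}, significant={significant}")
```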
Contributors: Jones, Isaac (Author) / Liu, Huan (Thesis advisor) / Maciejewski, Ross (Committee member) / Shakarian, Paulo (Committee member) / Agarwal, Nitin (Committee member) / Arizona State University (Publisher)
Created: 2019