Matching Items (76)
Description
The accurate monitoring of the bulk transmission system of the electric power grid by sensors, such as Remote Terminal Units (RTUs) and Phasor Measurement Units (PMUs), is essential for maintaining the reliability of the modern power system. One of the primary objectives of power system monitoring is the identification of the snapshots of the system at regular intervals by performing state estimation using the available measurements from the sensors. The process of state estimation corresponds to the estimation of the complex voltages at all buses of the system. PMU measurements play an important role in this regard, because of the time-synchronized nature of these measurements as well as the faster rates at which they are produced. However, a model-based linear state estimator created using PMU-only data requires complete observability of the system by PMUs for its continuous functioning. The conventional model-based techniques also make certain assumptions in the modeling of the physical system, such as the constant values of the line parameters. The measurement error models in the conventional state estimators are also assumed to follow a Gaussian distribution. In this research, a data mining technique using Deep Neural Networks (DNNs) is proposed for performing a high-speed, time-synchronized state estimation of the transmission system of the power system. The proposed technique uses historical data to identify the correlation between the measurements and the system states as opposed to directly using the physical model of the system. Therefore, the highlight of the proposed technique is its ability to provide an accurate, fast, time-synchronized estimate of the system states even in the absence of complete system observability by PMUs.
The state estimator is formulated for the IEEE 118-bus system, and its reliable performance is demonstrated in the presence of redundant observability, complete observability, and incomplete observability. The robustness of the state estimator is also demonstrated by performing the estimation in the presence of non-Gaussian measurement errors and varying line parameters. The consistency of the DNN state estimator is demonstrated by performing state estimation for an entire day.
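To make the data-driven idea concrete, below is a minimal sketch (in PyTorch, assuming it is available) of how a feedforward DNN could be trained on historical snapshots to map a PMU measurement vector to the bus voltage states of the IEEE 118-bus system. The network architecture, measurement dimension, and random placeholder data are illustrative assumptions, not the estimator developed in this thesis.

```python
import torch
import torch.nn as nn

N_BUSES = 118           # IEEE 118-bus system
N_MEAS = 300            # assumed length of the PMU measurement vector
N_STATES = 2 * N_BUSES  # voltage magnitude and angle at every bus

# Simple fully connected regressor: measurements -> states (illustrative only).
model = nn.Sequential(
    nn.Linear(N_MEAS, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_STATES),
)

# Placeholder data standing in for archived historical PMU snapshots.
measurements = torch.randn(1024, N_MEAS)
states = torch.randn(1024, N_STATES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(measurements), states)
    loss.backward()
    optimizer.step()

# Inference: one forward pass yields a time-synchronized state estimate,
# even when the measurement vector covers only part of the system.
estimate = model(measurements[:1])
```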
Contributors: Chandrasekaran, Harish (Author) / Pal, Anamitra (Thesis advisor) / Sen, Arunabha (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Distributed systems are prone to attacks, called Sybil attacks, wherein an adversary may generate an unbounded number of bogus identities to gain control over the system. In this thesis, an algorithm, DownhillFlow, for mitigating such attacks is presented and tested experimentally. The trust rankings produced by the algorithm are significantly better than those of the distributed SybilGuard protocol and only slightly worse than those of the best-known Sybil defense algorithm, ACL. The results obtained for ACL are consistent with those obtained in previous studies. The running times of the algorithms are also tested and two results are obtained: first, DownhillFlow's running time is found to be significantly faster than any existing algorithm including ACL, terminating in slightly over one second on the 300,000-node DBLP graph. This allows it to be used in settings such as dynamic networks as-is with no additional functionality needed. Second, when ACL is configured such that it matches DownhillFlow's speed, it fails to recognize large portions of the input graphs and its accuracy among the portion of the graphs it does recognize becomes lower than that of DownhillFlow.
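For orientation only, the sketch below illustrates the general idea behind trust-ranking Sybil defenses: trust spreads from a known-honest seed across the social graph, so regions connected to the honest region by few attack edges accumulate little trust. This is a simplified power-iteration-style propagation, not the DownhillFlow algorithm itself; the toy graph and iteration count are assumptions.

```python
from collections import defaultdict

def propagate_trust(adjacency, seed, iterations=10):
    """Spread trust from a trusted seed across an undirected social graph.

    adjacency: dict mapping node -> list of neighbors.
    Returns a dict of trust scores; Sybil regions connected to the honest
    region through few attack edges end up with low scores.
    """
    trust = defaultdict(float)
    trust[seed] = 1.0
    for _ in range(iterations):
        nxt = defaultdict(float)
        for node, score in trust.items():
            neighbors = adjacency[node]
            if not neighbors:
                nxt[node] += score
                continue
            share = score / len(neighbors)
            for nb in neighbors:
                nxt[nb] += share
        trust = nxt
    return dict(trust)

# Toy example: an honest triangle weakly attached to a Sybil pair.
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "s1"],
    "s1": ["c", "s2"], "s2": ["s1"],
}
ranking = sorted(propagate_trust(graph, seed="a").items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)  # honest nodes should rank above s1 and s2
```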
Contributors: Bradley, Michael (Author) / Bazzi, Rida (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Realizing the applications of the Internet of Things (IoT), with the goal of achieving a more efficient and automated world, requires billions of connected smart devices and the minimization of hardware cost in these devices. As a result, many IoT devices do not have sufficient resources to support the various protocols required in many IoT applications. Because of this, new protocols have been introduced to support the integration of these devices. One of these protocols is the increasingly popular routing protocol for low-power and lossy networks (RPL). However, this protocol is well known to attract blackhole and sinkhole attacks, and the resource constraints of RPL devices make it difficult to apply more computationally intensive defenses such as intrusion detection systems and rank authentication schemes. In this paper, an effective approach is presented to protect RPL networks against blackhole attacks. The approach does not address sinkhole attacks separately because they cause low damage, are often used alongside blackhole attacks, and can be detected when the accompanying blackhole attacks are detected. This approach uses the feature of multiple parents per node together with a parent evaluation system that enables nodes to select more reliable routes. Simulations have been conducted; compared to existing approaches, this approach provides better protection against blackhole attacks with much lower overhead for small RPL networks.
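A minimal sketch of the kind of parent evaluation this approach relies on is shown below: a node keeps several candidate parents, scores them by observed forwarding behavior, and routes through the most reliable one. The metric names, threshold, and tie-breaking rule are hypothetical illustrations, not the evaluation system defined in this work.

```python
from dataclasses import dataclass

@dataclass
class ParentStats:
    node_id: str
    packets_sent: int = 0
    acks_received: int = 0
    rank: int = 0  # RPL rank advertised by the candidate parent

    @property
    def delivery_ratio(self) -> float:
        return self.acks_received / self.packets_sent if self.packets_sent else 0.0

def select_parent(candidates, min_delivery=0.6):
    """Pick the most reliable parent; suspected blackholes (low delivery
    ratio) are skipped even if they advertise an attractive rank."""
    trusted = [p for p in candidates if p.delivery_ratio >= min_delivery]
    pool = trusted or candidates            # fall back if nothing qualifies
    return min(pool, key=lambda p: (p.rank, -p.delivery_ratio))

candidates = [
    ParentStats("p1", packets_sent=40, acks_received=39, rank=2),
    ParentStats("p2", packets_sent=40, acks_received=3, rank=1),   # likely blackhole
]
print(select_parent(candidates).node_id)  # -> "p1"
```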
Contributors: Sanders, Kent (Author) / Yau, Stephen S (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Social media is a medium that contains rich information shared by many users every second of every day. This information can be utilized for various outcomes such as understanding user behaviors, learning the effect of social media on a community, and developing a decision-making system based on the information available. With the growing popularity of social networking sites, people can freely express their opinions and feelings, which results in a tremendous amount of user-generated data. The rich amount of social media data has opened the path for researchers to study and understand users' behaviors and mental health conditions. Several studies have shown that social media provides a means to capture an individual's state of mind. Given the social media data and related work in this field, this work studies the scope of users' discussion among online mental health communities.

In the first part of this dissertation, this work focuses on the role of social media on mental health within the sexual abuse community. It employs natural language processing techniques to extract the topics of responses and examine how diverse these topics are, in order to answer research questions such as whether responses are limited to emotional support; if not, what the other topics are; what the diversity of topics reveals; and how online responses differ from the traditional responses found in the physical world. To answer these questions, this work extracts Reddit posts on rape to understand the nature of user responses to this stigmatized topic.

In the second part of this dissertation, this work expands to a broader range of online communities. In particular, it investigates the potential roles of social media on mental health among five major communities, i.e., the trauma and abuse community, the psychosis and anxiety community, the compulsive disorders community, the coping and therapy community, and the mood disorders community. This work studies how people interact with each other in each of these communities and how these online forums provide a resource to users who seek help. To understand users' behaviors, this work extracts Reddit posts from 52 related subcommunities and analyzes the linguistic behavior of each community. Experiments in this dissertation show that Reddit is a good medium for users with mental health issues to find related helpful resources. Another notable observation is a topic cluster from users' posts which shows that discussion and communication among users help individuals find proper resources for their problems. Moreover, results show that the anonymity of users on Reddit allows them to have discussions about topics beyond social support, such as financial and religious support.
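As a purely illustrative sketch of the topic-extraction step (the dissertation's actual pipeline, preprocessing, and model settings may differ), the snippet below fits a small LDA topic model to placeholder posts with scikit-learn and prints the top terms of each topic.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder posts standing in for collected Reddit submissions and comments.
posts = [
    "thank you for sharing, you are not alone and support is here",
    "a therapist helped me work through the trauma over many months",
    "legal advice on reporting and what to expect from the process",
    "financial help and housing resources after leaving the situation",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top = component.argsort()[-5:][::-1]          # five highest-weight terms
    print(f"topic {k}:", [terms[i] for i in top])
```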
Contributors: Kamarudin, Nur Shazwani (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Sen, Arunabha (Committee member) / Hu, Xia (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Quantum computing holds the potential to revolutionize various industries by solving problems that classical computers cannot solve efficiently. However, building quantum computers is still in its infancy, and simulators are currently the best available option to explore the potential of quantum computing. Therefore, developing comprehensive benchmarking suites for quantum computing simulators is essential to evaluate their performance and guide the development of future quantum algorithms and hardware. This study presents a systematic evaluation of quantum computing simulators’ performance using a benchmarking suite. The benchmarking suite is designed to meet the industry-standard performance benchmarks established by the Defense Advanced Research Projects Agency (DARPA) and includes standardized test data and comparison metrics that encompass a wide range of applications, deep neural network models, and optimization techniques. The thesis is divided into two parts to cover basic quantum algorithms and variational quantum algorithms for practical machine-learning tasks. In the first part, the run time and memory performance of quantum computing simulators are analyzed using basic quantum algorithms. The performance is evaluated using standardized test data and comparison metrics that cover fundamental quantum algorithms, including Quantum Fourier Transform (QFT), Inverse Quantum Fourier Transform (IQFT), Quantum Adder, and Variational Quantum Eigensolver (VQE). The analysis provides valuable insights into the simulators’ strengths and weaknesses and highlights the need for further development to enhance their performance. In the second part, benchmarks are developed using variational quantum algorithms for practical machine learning tasks such as image classification, natural language processing, and recommendation. The benchmarks address several unique challenges posed by benchmarking quantum machine learning (QML), including the effect of optimizations on time-to-solution, the stochastic nature of training, the inclusion of hybrid quantum-classical layers, and the diversity of software and hardware systems. The findings offer valuable insights into the simulators’ ability to solve practical machine-learning tasks and pinpoint areas for future research and enhancement. In conclusion, this study provides a rigorous evaluation of quantum computing simulators’ performance using a benchmarking suite that meets industry-standard performance benchmarks.
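As a toy analogue of a run-time and memory benchmark (not the DARPA-aligned suite developed in this thesis), the sketch below builds a dense Quantum Fourier Transform matrix in NumPy, applies it to a random state vector, and reports timing and the matrix's memory footprint for a few qubit counts; the sizes and metrics are assumptions.

```python
import time
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Dense N x N Quantum Fourier Transform matrix, N = 2**n_qubits."""
    n = 2 ** n_qubits
    omega = np.exp(2j * np.pi / n)
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return omega ** (j * k) / np.sqrt(n)

for n_qubits in (4, 8, 10):
    dim = 2 ** n_qubits
    state = np.random.randn(dim) + 1j * np.random.randn(dim)
    state /= np.linalg.norm(state)
    start = time.perf_counter()
    qft_matrix(n_qubits) @ state               # statevector simulation step
    elapsed = time.perf_counter() - start
    mem_mb = dim * dim * 16 / 1e6              # complex128 matrix footprint
    print(f"{n_qubits} qubits: {elapsed:.4f} s, ~{mem_mb:.1f} MB for the QFT matrix")
```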
Contributors: Sathyakumar, Rajesh (Author) / Spanias, Andreas (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Water, energy, and food are essential resources to sustain the development of society. The Food-Energy-Water Nexus (FEW-Nexus) must account for synergies and trade-offs among these resources. The nexus concept highlights the importance of integrative solutions that secure supplies to meet demands sustainably. The existing frameworks and tools do not focus on formal model composability, a key capability for composing simulations from separately developed models. The Knowledge Interchange Broker (KIB) approach is used to model the interactions among models to achieve composition flexibility for the FEW-Nexus.

Domain experts generally use the Water Evaluation and Planning (WEAP) and Low Emissions Analysis Platform (LEAP) systems to study water and energy systems, respectively. The food part of FEW systems can be modeled inside the WEAP system. An internal linkage mechanism is available for combining and simulating WEAP and LEAP models. This mechanism is used for the validation and performance evaluation of the independent modeling and simulation proposed in this research. The Componentized WEAP and LEAP RESTful frameworks are component-based representations for the legacy and closed-source WEAP and LEAP systems. These modularized systems simplify their use with other simulation frameworks. This research proposes two interaction model frameworks based on the Knowledge Interchange Broker approach. First, an Algorithmic Interaction Model (Algorithmic-IM) was developed to integrate the WEAP and LEAP models. The Algorithmic-IM model can be defined via a programming language and has a fixed cyclic execution protocol. However, this approach tightly interweaves the interaction model with its execution and has limited support for flexibly creating model hierarchies. To overcome these restrictions, the system-theoretic Parallel DEVS formalism is used to develop a DEVS-Based Interaction Model (DEVS-IM). As in the Algorithmic-IM, the DEVS-IM is implemented as a RESTful framework, uses MongoDB for defining structural DEVS models, and supports automatic code generation for the DEVSSuite simulator. The DEVS-IM offers modular, hierarchical structural modeling, reusability, flexibility, and maintainability for integrating disparate systems. The Phoenix Active Management Area (AMA) is used to demonstrate the real-world application of the proposed research. Furthermore, the correctness and performance of the frameworks presented in this research are evaluated using the Phoenix-AMA model.
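The sketch below is a deliberately simplified, hypothetical illustration of the Knowledge Interchange Broker idea: a broker mediates the cyclic exchange of data between independently developed water and energy model components. The class names, coupling factors, and yearly loop are invented for illustration and do not reproduce the Algorithmic-IM or DEVS-IM implementations.

```python
class WaterModel:
    """Stand-in for a componentized WEAP-like water model."""
    def step(self, year, energy_demand_gwh):
        water_for_energy = 0.002 * energy_demand_gwh   # assumed coupling factor
        return {"year": year, "water_delivered_mcm": 500 - water_for_energy}

class EnergyModel:
    """Stand-in for a componentized LEAP-like energy model."""
    def step(self, year, water_delivered_mcm):
        pumping_energy = 0.05 * water_delivered_mcm    # assumed coupling factor
        return {"year": year, "energy_demand_gwh": 1200 + pumping_energy}

class InteractionBroker:
    """KIB-style mediator: transforms and relays data between the models."""
    def __init__(self, water, energy):
        self.water, self.energy = water, energy
        self.energy_demand = 1200.0                    # initial guess

    def run(self, years):
        for year in years:
            w = self.water.step(year, self.energy_demand)
            e = self.energy.step(year, w["water_delivered_mcm"])
            self.energy_demand = e["energy_demand_gwh"]
            yield {**w, **e}

for record in InteractionBroker(WaterModel(), EnergyModel()).run(range(2025, 2028)):
    print(record)
```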
Contributors: Fard, Mostafa D (Author) / Sarjoughian, Hessam S (Thesis advisor) / Barton, Michael (Committee member) / Sen, Arunabha (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The problem of monitoring complex networks for the detection of anomalous behavior is well known. Sensors are usually deployed for the purpose of monitoring these networks for anomalies, and Sensor Placement Optimization (SPO) is the problem of determining where these sensors should be placed (deployed) in the network. Prior works have utilized the well-known Set Cover formulation in order to determine the locations where sensors should be placed in the network so that anomalies can be effectively detected. However, such works cannot be utilized to address the problem when the objective is not only to detect the presence of anomalies, but also to identify (distinguish) the source(s) of the detected anomalies, i.e., to uniquely monitor the network. In this dissertation, I attempt to fill this gap by utilizing the mathematical concept of Identifying Codes and illustrating how the concept, along with its variants, can not only overcome the aforementioned limitation but also be utilized to monitor complex networks modeled from multiple domains. Over the course of this dissertation, I make key contributions which further enhance the efficacy and applicability of Identifying Codes as a monitoring strategy. First, I show how Identifying Codes are superior not only to the Set Cover formulation but also to standard graph centrality metrics for the purpose of uniquely monitoring complex networks. Second, I study novel problems such as the budget-constrained Identifying Code, the scalable Identifying Code, and the robust Identifying Code, and present algorithms and results for the respective problems. Third, I present useful Identifying Code results for restricted graph classes such as Unit Interval Bigraphs and Unit Disc Bigraphs. Finally, I show the universality of Identifying Codes by applying the concept to multiple domains.
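To illustrate the underlying concept (not any of the algorithms contributed in this dissertation), the sketch below checks whether a set of monitored vertices forms an identifying code: every vertex's closed neighborhood must intersect the code in a nonempty set, and these intersections must be pairwise distinct so that each anomaly source is uniquely identified. The example path graph is an assumption.

```python
def is_identifying_code(adjacency, code):
    """adjacency: dict vertex -> set of neighbors; code: set of vertices."""
    signatures = {}
    for v, neighbors in adjacency.items():
        closed = neighbors | {v}
        signature = frozenset(closed & code)
        if not signature:                 # v would never trigger any sensor
            return False
        if signature in signatures.values():
            return False                  # v indistinguishable from another vertex
        signatures[v] = signature
    return True

# Path graph 1-2-3-4: placing sensors on {1, 2, 3} uniquely identifies every vertex.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_identifying_code(path, {1, 2, 3}))  # True
print(is_identifying_code(path, {1, 3}))     # False: vertices 3 and 4 look identical
```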
Contributors: Basu, Kaustav (Author) / Sen, Arunabha (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Arc Routing Problems (ARPs) are a type of routing problem that finds routes of minimum total cost covering the edges or arcs in a graph representing street or road networks. They find application in many essential services such as residential waste collection, winter gritting, and others. Being NP-hard, solutions are usually found using heuristic methods. This dissertation contributes to heuristics for ARP, with a focus on the Capacitated Arc Routing Problem (CARP) with additional constraints. In operations such as residential waste collection, vehicle breakdown disruptions occur frequently. A new variant, the Capacitated Arc Re-routing Problem for Vehicle Break-down (CARP-VB), is introduced to address the need to re-route using only the remaining vehicles in order to avoid missed services. A new heuristic, Probe, is developed to solve CARP-VB. Experiments on benchmark instances show that Probe is effective in reducing the makespan and hence in reducing delays and avoiding missed services. In addition to total cost, operators are also interested in solutions that are attractive, that is, routes that are contiguous, compact, and non-overlapping, to simplify managing the work. Operators may not adopt a solution that is not attractive even if it is optimum. They are also interested in solutions that are balanced in workload to meet equity requirements. A new multi-objective memetic algorithm, MA-ABC, is developed that optimizes three objectives: attractiveness, makespan, and total cost. On testing with benchmark instances, MA-ABC was found to be effective in providing attractive and balanced route solutions without affecting the total cost. Changes in the problem specification, such as demand and topology, occur frequently in business operations. Machine learning can be applied to learn the distribution behind these changes and generate solutions quickly at inference time. Splice is a machine learning framework for CARP that quickly generates close-to-optimum solutions using a graph neural network and deep Q-learning. Splice can solve several variants of node and arc routing problems using the same architecture without any modification. Splice was trained and tested using randomly generated instances. Splice generated solutions faster than popular metaheuristics, and the solutions were also of better quality.
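Purely as an illustration of the re-routing objective in CARP-VB (not the Probe heuristic itself), the sketch below greedily reassigns the unserved tasks of a broken-down vehicle to the surviving vehicles so that the makespan grows as little as possible; the demands, capacities, and service times are invented placeholders.

```python
import heapq

def reassign_after_breakdown(remaining_loads, unserved_tasks, capacity):
    """remaining_loads: current (time, used_capacity) per surviving vehicle.
    unserved_tasks: list of (service_time, demand) left by the broken vehicle.
    Greedy: give each task to the least-loaded vehicle that can still carry it."""
    heap = [(time, used, vid) for vid, (time, used) in enumerate(remaining_loads)]
    heapq.heapify(heap)
    assignment = {vid: [] for vid in range(len(remaining_loads))}
    for service_time, demand in sorted(unserved_tasks, reverse=True):
        skipped = []
        while heap:
            time, used, vid = heapq.heappop(heap)
            if used + demand <= capacity:
                assignment[vid].append((service_time, demand))
                heapq.heappush(heap, (time + service_time, used + demand, vid))
                break
            skipped.append((time, used, vid))
        else:
            raise ValueError("no surviving vehicle can absorb this task")
        for item in skipped:
            heapq.heappush(heap, item)
    makespan = max(time for time, _, _ in heap)
    return assignment, makespan

loads = [(120, 30), (90, 45)]                    # (route time, load) per vehicle
tasks = [(15, 10), (25, 20), (10, 5)]            # (service time, demand)
print(reassign_after_breakdown(loads, tasks, capacity=70))
```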
Contributors: Ramamoorthy, Muhilan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Forrest, Stephanie (Committee member) / Mirchandani, Pitu (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The presence of a rich set of embedded sensors on mobile devices has been fuelling various sensing applications regarding the activities of individuals and their surrounding environment, and these ubiquitous sensing-capable mobile devices are pushing the new paradigm of Mobile Crowd Sensing (MCS) from concept to reality. MCS aims to outsource sensing data collection to mobile users, and it could revolutionize the traditional ways of sensing data collection and processing. In the meantime, cloud computing provides cloud-backed infrastructures that allow mobile devices with network access to extend their capabilities. With enormous computational and storage resources along with sufficient bandwidth, the cloud functions as the hub that handles the sensing service requests from sensing service consumers and coordinates sensing task assignment among eligible mobile users to reach a desired quality of sensing service. This paper studies the problem of sensing task assignment to mobile device owners with specific spatio-temporal traits to minimize the cost and maximize the utility in MCS while adhering to QoS constraints. Greedy approaches and hybrid solutions combined with bee algorithms are explored to address the problem.
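A minimal sketch of a greedy, cost-based task assignment in the spirit described above is given below; the bee-algorithm hybrids and the exact utility and QoS model of this work are not reproduced, and the costs, coverage requirements, and quality threshold are illustrative assumptions.

```python
def greedy_assign(tasks, users, min_quality=0.7):
    """tasks: dict task_id -> required number of reports.
    users: list of dicts with 'id', 'cost', 'quality', and 'reachable' task ids.
    Greedily picks the cheapest qualified user for each remaining report slot."""
    assignment = {t: [] for t in tasks}
    total_cost = 0.0
    for task_id, needed in tasks.items():
        candidates = sorted(
            (u for u in users
             if task_id in u["reachable"] and u["quality"] >= min_quality),
            key=lambda u: u["cost"],
        )
        for user in candidates[:needed]:
            assignment[task_id].append(user["id"])
            total_cost += user["cost"]
    return assignment, total_cost

users = [
    {"id": "u1", "cost": 1.0, "quality": 0.9, "reachable": {"t1", "t2"}},
    {"id": "u2", "cost": 0.5, "quality": 0.8, "reachable": {"t1"}},
    {"id": "u3", "cost": 0.8, "quality": 0.6, "reachable": {"t2"}},  # below QoS
]
print(greedy_assign({"t1": 2, "t2": 1}, users))
```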

Moreover, the privacy concerns arise with the widespread deployment of MCS from both the data contributors and the sensing service consumers. The uploaded sensing data, especially those tagged with spatio-temporal information, will disclose the personal information of the data contributors. In addition, the sensing service requests can reveal the personal interests of service consumers. To address the privacy issues, this paper constructs a new framework named Privacy-Preserving Mobile Crowd Sensing (PP-MCS) to leverage the sensing capabilities of ubiquitous mobile devices and cloud infrastructures. PP-MCS has a distributed architecture without relying on trusted third parties for privacy-preservation. In PP-MCS, the sensing service consumers can retrieve data without revealing the real data contributors. Besides, the individual sensing records can be compared against the aggregation result while keeping the values of sensing records unknown, and the k-nearest neighbors could be approximately identified without privacy leaks. As such, the privacy of the data contributors and the sensing service consumers can be protected to the greatest extent possible.
Contributors: Wang, Zhijie (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Millions of users leave digital traces of their political engagements on social media platforms every day. Users form networks of interactions, produce textual content, and like and share each other's content. This creates an invaluable opportunity to better understand the political engagements of internet users. In this proposal, I present three algorithmic solutions to three facets of online political networks; namely, detection of communities, detection of antagonisms, and the impact of certain types of accounts on political polarization. First, I develop a multi-view community detection algorithm to find politically pure communities. I find that word usage, among other content types (i.e., hashtags, URLs), best complements user interactions in accurately detecting communities.

Second, I focus on detecting negative linkages between politically motivated social media users. Major social media platforms do not provide their users with built-in negative interaction options. However, many political network analysis tasks rely not only on positive but also on negative linkages. Here, I present the SocLSFact framework to detect negative linkages among social media users. It utilizes three pieces of information: sentiment cues of textual interactions, positive interactions, and socially balanced triads. I evaluate the contribution of each of the three aspects to negative link detection performance on multiple tasks.

Third, I propose an experimental setup that quantifies the polarization impact of automated accounts on Twitter retweet networks. I focus on a dataset of the tragic Parkland shooting event and its aftermath. I show that when automated accounts are removed from the retweet network, the network polarization decreases significantly, whereas when the same number of accounts is removed at random, the difference is not significant. I also find that the prominent predictors of engagement with automatically generated content are not very different from what previous studies point out for engaging content on social media in general. Last but not least, I identify accounts which self-disclose their automated nature in their profiles by using expressions such as bot, chat-bot, or robot. I find that human engagement with self-disclosing accounts is much smaller compared to non-disclosing automated accounts. This observational finding can motivate further efforts in automated account detection research to prevent the unintended impact of such accounts.
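A rough, self-contained sketch of the comparison described above is given below (the actual polarization measure, dataset, and significance testing of this work are not reproduced): remove the accounts flagged as automated from a toy retweet network versus the same number of randomly chosen accounts, and compare a simple polarization proxy, here the modularity of a fixed two-camp split, using NetworkX.

```python
import random
import networkx as nx
from networkx.algorithms.community import modularity

def polarization(graph, side_of):
    """Polarization proxy: modularity of the retweet graph under a fixed
    two-camp (e.g. left/right) split restricted to surviving nodes."""
    parts = [{n for n in graph if side_of[n] == s} for s in ("left", "right")]
    return modularity(graph, parts)

# Toy retweet network: two dense camps; a few nodes are flagged as bots.
random.seed(0)
G = nx.Graph()
side_of, bots = {}, set()
for i in range(40):
    side_of[i] = "left" if i < 20 else "right"
    if i % 10 == 0:
        bots.add(i)
for u in range(40):
    for v in range(u + 1, 40):
        same = side_of[u] == side_of[v]
        if random.random() < (0.3 if same else 0.02):
            G.add_edge(u, v)

def without(graph, nodes):
    H = graph.copy()
    H.remove_nodes_from(nodes)
    return H

random_removal = random.sample([n for n in G if n not in bots], len(bots))
print("full graph:     ", round(polarization(G, side_of), 3))
print("bots removed:   ", round(polarization(without(G, bots), side_of), 3))
print("random removed: ", round(polarization(without(G, random_removal), side_of), 3))
```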
Contributors: Ozer, Mert (Author) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Sen, Arunabha (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019