Matching Items (14)
Description
Service-based software (SBS) systems are software systems consisting of services based on the service-oriented architecture (SOA). Each service in an SBS system provides partial functionality and collaborates with other services in workflows to provide the functionality required by the system. These services may be developed and/or owned by different entities and physically distributed across the Internet. Compared with traditional software components, which are usually designed specifically for their target systems and tightly bound to them, services have standardized interfaces and communication protocols, which allow SBS systems to support late binding, provide better interoperability, offer greater flexibility in dynamic business logic, and achieve higher fault tolerance. The development process of SBS systems can be divided into three major phases: 1) SBS specification, 2) service discovery and matching, and 3) service composition and workflow execution. This dissertation focuses on the second phase and presents a privacy-preserving service discovery and ranking approach for multiple user QoS requirements. This approach helps service providers register services and service users search for services through public but untrusted service directories while protecting their privacy against those directories. The service directories can match the registered services with service requests but do not learn any information about them. The approach also enforces access control on services during the matching process, which prevents unauthorized users from discovering services. After the service directories match a set of services that satisfy the service users' functionality requirements, the service discovery approach presented in this dissertation further considers the users' QoS requirements in two steps. First, it optimizes services' QoS by making tradeoffs among various QoS aspects according to users' QoS requirements and preferences. Second, it ranks services based on how well they satisfy users' QoS requirements, helping service users select the most suitable service for developing their SBS systems.
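A minimal sketch (not the dissertation's actual algorithm) of the ranking step: normalize each QoS attribute of a functionally matched service and combine them into a weighted utility according to user preferences. The attribute names, weights, and normalization scheme below are illustrative assumptions only.

```python
# A minimal sketch of ranking candidate services by how well they satisfy
# a user's QoS requirements. Attributes, weights, and value ranges are
# hypothetical.

def normalize(value, worst, best):
    """Map a raw QoS value onto [0, 1], where 1 is best."""
    if best == worst:
        return 1.0
    return (value - worst) / (best - worst)

def qos_score(service, requirements, weights):
    """Weighted utility over QoS aspects; higher is better."""
    score = 0.0
    for attr, (worst, best) in requirements.items():
        score += weights.get(attr, 0.0) * normalize(service[attr], worst, best)
    return score

# Hypothetical candidates that already passed functional matching.
# "Lower is better" attributes (latency, cost) are negated so a single
# "higher is better" normalization applies everywhere.
candidates = [
    {"name": "svcA", "latency_ms": -120, "availability": 0.99, "cost": -0.05},
    {"name": "svcB", "latency_ms": -80,  "availability": 0.95, "cost": -0.12},
]
requirements = {"latency_ms": (-500, -50), "availability": (0.9, 1.0), "cost": (-0.5, 0.0)}
weights = {"latency_ms": 0.4, "availability": 0.4, "cost": 0.2}

ranked = sorted(candidates, key=lambda s: qos_score(s, requirements, weights), reverse=True)
print([s["name"] for s in ranked])
```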
Contributors: Yin, Yin (Author) / Yau, Stephen S. (Thesis advisor) / Candan, Kasim (Committee member) / Dasgupta, Partha (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Recommender systems are a type of information filtering system that suggests items that may be of interest to a user. Most information retrieval systems contain an overwhelmingly large number of entries, and most users would experience information overload if they were forced to explore the full set of results. The goal of recommender systems is to overcome this limitation by predicting how users will value certain items and returning the items that should be of the highest interest to the user. Most recommender systems collect explicit user feedback, such as a rating, and attempt to optimize their model to this rating value. However, a system can also collect implicit user feedback, such as user purchases and clicks, to learn user preferences. Additionally, with implicit user feedback, the system can remember the context of the feedback in terms of which other items a user was considering when making their decisions. When considering implicit user feedback, only a subset of all evaluation techniques can be used, and sufficient evaluation techniques for implicit user feedback do not currently exist. In this thesis, I introduce a new model for recommendation that borrows the idea of opportunity cost from economics. There are two variations of the model, one that considers context and one that does not. Additionally, I propose a new evaluation measure that works specifically for the case of implicit user feedback.
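To make the context idea concrete, here is a minimal sketch of a context-aware evaluation over implicit feedback: for each observed choice, check whether the model ranks the chosen item above the alternatives the user was considering at the time. This is an illustrative stand-in, not the evaluation measure proposed in the thesis; the data and scoring function are made up.

```python
# Toy context-aware evaluation for implicit feedback (illustrative only).

def contextual_hit_rate(interactions, score):
    """Fraction of interactions where the chosen item outscores every
    alternative the user was shown alongside it."""
    hits = 0
    for user, chosen, alternatives in interactions:
        if all(score(user, chosen) > score(user, alt) for alt in alternatives):
            hits += 1
    return hits / len(interactions) if interactions else 0.0

# Hypothetical data: (user, item actually clicked, items shown with it).
interactions = [
    ("u1", "i3", ["i1", "i2"]),
    ("u2", "i2", ["i3", "i5"]),
]

# Hypothetical scores standing in for a trained recommendation model.
model_scores = {("u1", "i3"): 0.9, ("u1", "i1"): 0.4, ("u1", "i2"): 0.7,
                ("u2", "i2"): 0.3, ("u2", "i3"): 0.6, ("u2", "i5"): 0.2}
score = lambda u, i: model_scores.get((u, i), 0.0)

print(contextual_hit_rate(interactions, score))  # 0.5 for this toy data
```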
Contributors: Ackerman, Brian (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The need for automated/computational fact checking has grown substantially in recent times due to the high volume of false information and the limited workforce of human fact checkers. This need has spawned research and new developments in the field and has produced many different systems and approaches to this complex problem. This paper attempts not only to explain the most popular methods currently in use, but also to provide experimental results from the comparison of two different systems, a replication of the results from their respective papers, and an annotated dataset of test sentences for use with these systems.
Contributors: Rosenkilde, Trevor Curtis (Author) / Papotti, Paolo (Thesis director) / Candan, Kasim (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
Bitcoin (BTC) shares many characteristics with traditional stocks, but it is much more volatile since the cryptocurrency market is unregulated. The high volatility makes BTC a very high-risk, high-reward investment, and predictive analysis can be very useful for obtaining good returns and minimizing risk. Taking Cocco et al. [1] as the primary reference, this thesis tries to reproduce their findings by building two BTC price forecasting models, a Long Short-Term Memory (LSTM) network and a Bayesian Neural Network (BNN), and finds that the Mean Absolute Percentage Error (MAPE) is lower for the initial BNN model than for the initial LSTM model. In addition to forecasting the value of BTC, a metric called trend% is developed to gauge the models' ability to capture the trend of how the price varies from one timestep to the next, and it is used to compare trend prediction performance. It is found that both initial models make random predictions for the trend. After improvements such as removing the stochastic component from the data and forecasting returns rather than price values, both models show comparable performance in terms of both MAPE and trend%. The thesis concludes by discussing future work that could potentially improve the above models. One possibility mentioned is to use on-chain data from the BTC blockchain, coupled with real-world knowledge of BTC exchanges, as input features to the models.
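A minimal sketch of the two evaluation quantities mentioned above: MAPE over predicted prices, and a trend%-style measure of how often a model predicts the correct direction of change from one timestep to the next. The exact definition of trend% in the thesis may differ; the prices and forecasts below are made up.

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def trend_pct(actual, predicted):
    """Percentage of timesteps where the predicted price moves in the
    same direction (up vs. down) as the actual price."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    actual_dir = np.sign(np.diff(actual))
    predicted_dir = np.sign(np.diff(predicted))
    return 100.0 * np.mean(actual_dir == predicted_dir)

# Hypothetical BTC closing prices and model forecasts.
actual    = [19000, 19250, 19100, 19400, 19350]
predicted = [18900, 19300, 19200, 19300, 19500]

print(f"MAPE:   {mape(actual, predicted):.2f}%")
print(f"trend%: {trend_pct(actual, predicted):.1f}%")  # 75.0% on this toy data
```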
Contributors: Mittal, Shivansh (Author) / Boscovic, Dragan (Thesis advisor) / Davulcu, Hasan (Committee member) / Candan, Kasim (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The goal of fact checking is to determine if a given claim holds. A promising approach for this task is to exploit reference information in the form of knowledge graphs (KGs), a structured and formal representation of knowledge with semantic descriptions of entities and relations. KGs are successfully used in multiple applications, but the information stored in a KG is inevitably incomplete. In order to address this incompleteness problem, this thesis proposes a new method built on top of recent results in logical rule discovery in KGs, implemented in the RuDik system, and a probabilistic extension of answer set programs called LPMLN.

This thesis presents the integration of RuDik, which discovers logical rules over a given KG, with LPMLN, which performs probabilistic inference to validate a fact. While the rules automatically discovered over a KG are intended for human selection and revision, they can be turned into LPMLN programs with only minor modification. Leveraging the probabilistic inference in LPMLN, it is possible to (i) derive new information that is not explicitly stored in the KG, with an associated probability, and (ii) provide supporting facts and rules as interpretable explanations for such decisions.
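A drastically simplified sketch of points (i) and (ii): apply a discovered Horn rule to a small KG to derive a fact that is not stored explicitly, attach the rule's confidence as a rough probability, and keep the supporting triples as an explanation. Real LPMLN inference is far more general; the KG, rule, and confidence below are made up.

```python
# Toy rule application over a tiny KG (illustrative only, not LPMLN).
kg = {
    ("alice", "bornIn", "paris"),
    ("paris", "locatedIn", "france"),
}

# Hypothetical rule from a RuDik-style miner, with confidence 0.9:
# bornIn(X, Y) & locatedIn(Y, Z) => nationality(X, Z)
rule = {"body": ["bornIn", "locatedIn"], "head": "nationality", "confidence": 0.9}

def derive(kg, rule):
    derived = []
    for (x, p1, y) in kg:
        if p1 != rule["body"][0]:
            continue
        for (y2, p2, z) in kg:
            if p2 == rule["body"][1] and y2 == y:
                fact = (x, rule["head"], z)
                if fact not in kg:
                    support = [(x, p1, y), (y2, p2, z)]
                    derived.append((fact, rule["confidence"], support))
    return derived

for fact, prob, support in derive(kg, rule):
    print(f"{fact} with probability {prob}, supported by {support}")
```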

Finally, this thesis presents experiments and results showing that this approach can label claims with high precision. The evaluation of the system also sheds light on the role played by the quality of the given rules and the quality of the KG.
Contributors: Pradhan, Anish (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Papotti, Paolo (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Reasoning with commonsense knowledge is an integral component of human behavior. It is due to this capability that people know that a weak person may not be able to lift someone. It has been a long-standing goal of the Artificial Intelligence community to simulate such commonsense reasoning abilities in machines. Over the years, many advances have been made and various challenges have been proposed to test these abilities. The Winograd Schema Challenge (WSC) is one such Natural Language Understanding (NLU) task, which was also proposed as an alternative to the Turing Test. It is made up of textual question-answering problems that require the resolution of a pronoun to its correct antecedent.

In this thesis, two approaches to developing NLU systems that solve the Winograd Schema Challenge are demonstrated. To this end, a semantic parser is presented, various kinds of commonsense knowledge are identified, techniques to extract commonsense knowledge are developed, and two commonsense reasoning algorithms are presented. The usefulness of the developed tools and techniques is shown by applying them to solve the challenge.
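A toy sketch of Winograd-style pronoun resolution using one piece of commonsense knowledge, in the spirit of the example above ("a weak person may not be able to lift someone"). The schema representation, knowledge tuple, and matching logic are all illustrative; the thesis's semantic parser and reasoning algorithms are far richer.

```python
# Toy WSC resolution (illustrative only).
schema = {
    "sentence": "The man could not lift his son because he was so weak.",
    "pronoun": "he",
    "candidates": ["the man", "his son"],
    "agent": "the man",    # who attempts the action
    "patient": "his son",  # who undergoes the action
}

# Hypothetical commonsense fact: the property "weak" explains an agent's
# inability to perform the action "lift".
commonsense = {("weak", "lift"): "agent"}

def resolve(schema, commonsense):
    prop = "weak"    # property attributed to the pronoun in the clause
    action = "lift"  # action the clause is about
    role = commonsense.get((prop, action))
    return schema[role] if role else None

print(resolve(schema, commonsense))  # -> "the man"
```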
Contributors: Sharma, Arpita (Author) / Baral, Chitta (Thesis advisor) / Lee, Joohyung (Committee member) / Papotti, Paolo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Social media has become popular in the past decade; Facebook, for example, has 1.59 billion monthly active users. With such massive social networks generating a lot of data, everyone is constantly looking for ways of leveraging the knowledge in social networks to make their systems more personalized to their end users. With the rapid increase in the usage of mobile phones and wearables, social media data is also being tied to spatial networks. This research document proposes an efficient technique that answers socially k-Nearest Neighbor (kNN) queries with a spatial range filter. The proposed approach performs a joint search on both the social and spatial domains, which radically improves performance compared to straightforward solutions. The research document proposes a novel index that combines social and spatial indexes. In other words, graph data is stored in an organized manner so that it can be filtered based on spatial constraints (a region of interest) and social constraints (the top-k closest vertices) at query time. This allows unnecessary paths to be pruned during the social graph traversal procedure so that only the top-k socially close venues are returned. The research document then experimentally shows that the proposed approach outperforms existing baseline approaches by at least three times, and also compares how each of the algorithms performs under various conditions on a real geo-social dataset extracted from Yelp.
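A minimal sketch of the query semantics: traverse the social graph outward from the query user and keep only venues that fall inside the region of interest, stopping once k venues are found. The graph, check-ins, and region below are made up, and a real system would use the combined social/spatial index described above rather than plain BFS.

```python
from collections import deque

# Hypothetical social graph and venue check-ins.
friends = {"u1": ["u2", "u3"], "u2": ["u4"], "u3": [], "u4": []}
checkins = {"u2": [("cafe", (3, 4))], "u3": [("bar", (9, 9))], "u4": [("park", (2, 2))]}

def in_region(point, region):
    (x, y), (xmin, ymin, xmax, ymax) = point, region
    return xmin <= x <= xmax and ymin <= y <= ymax

def social_knn_venues(user, region, k):
    """Return up to k venues, ordered by the social hop distance of the
    friend who checked in there, restricted to the spatial region."""
    results, seen = [], {user}
    queue = deque([(user, 0)])
    while queue and len(results) < k:
        current, hops = queue.popleft()
        for venue, location in checkins.get(current, []):
            if in_region(location, region):
                results.append((venue, hops))
                if len(results) == k:
                    break
        for friend in friends.get(current, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return results

print(social_knn_venues("u1", (0, 0, 5, 5), k=2))  # -> [('cafe', 1), ('park', 2)]
```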
Contributors: Pasumarthy, Nitin (Author) / Sarwat, Mohamed (Thesis advisor) / Papotti, Paolo (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
When scientific software is written to specify processes, it takes the form of a workflow and is often written in an ad-hoc manner in a dynamic programming language. There is a proliferation of legacy workflows implemented by non-expert programmers due to the accessibility of dynamic languages. Unfortunately, ad-hoc workflows lack the structured description provided by specialized management systems, making ad-hoc workflow maintenance and reuse difficult and motivating the need for analysis methods. Analyzing ad-hoc workflows with compiler techniques does not address dynamic languages well, since a program has so few constraints that its behavior cannot be predicted. In contrast, workflow provenance tracking has had success using run-time techniques to record data. The aim of this work is to develop a new analysis method for extracting workflow structure at run time, thus avoiding the issues posed by dynamic languages.

The method captures the dataflow of an ad-hoc workflow through its execution and abstracts it with a process for simplifying repetition. An instrumentation system first processes the workflow to produce an instrumented version, capable of logging events, which is then executed on an input to produce a trace. The trace undergoes dataflow construction to produce a provenance graph. The dataflow is examined for equivalent regions, which are collected into a single unit. The workflow is thus characterized in terms of its treatment of an input. Unlike other methods, a run-time approach characterizes the workflow's actual behavior, including elements that static analysis cannot predict (for example, code dynamically evaluated based on input parameters). It also enables the characterization of dataflow through external tools.
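A minimal sketch of the run-time idea: instrument workflow functions with a decorator that logs which outputs were derived from which inputs, building a small provenance graph as the workflow executes. The real system instruments and traces arbitrary ad-hoc Python code; this toy version, with its made-up two-step workflow, only records explicit function calls.

```python
import functools

provenance_edges = []  # (source_data, operation, derived_data)

def instrumented(func):
    """Record a provenance edge from each argument to the result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        for arg in list(args) + list(kwargs.values()):
            provenance_edges.append((repr(arg), func.__name__, repr(result)))
        return result
    return wrapper

# A hypothetical two-step ad-hoc workflow.
@instrumented
def clean(text):
    return text.strip().lower()

@instrumented
def tokenize(text):
    return text.split()

tokens = tokenize(clean("  Hello Provenance  "))

for src, op, dst in provenance_edges:
    print(f"{src} --{op}--> {dst}")
```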

The contributions of this work are a run-time method for recording a provenance graph from an ad-hoc Python workflow and a method to analyze the structure of a workflow from its provenance. The methods are implemented in Python and demonstrated on real-world Python workflows. These contributions enable users to derive graph structure from workflows. Empowered by a graphical view, users can better understand a legacy workflow. This makes the wealth of legacy ad-hoc workflows accessible, enabling workflow reuse instead of investing time and resources into creating a new workflow.
Contributors: Acuña, Ruben (Author) / Bazzi, Rida (Thesis advisor) / Lacroix, Zoé (Thesis advisor) / Candan, Kasim (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Splicing of digital images is a powerful form of tampering that transports regions of an image to create a composite image. When used as an artistic tool, this practice is harmless, but when composite images are used to create political associations or are submitted as evidence in the judicial system, they become far more consequential. In these cases, the distinction between an authentic image and a tampered image becomes important.

Many proposed approaches to image splicing detection follow the model of extracting features from authentic and tampered datasets and then classifying them using machine learning, with the goal of optimizing classification accuracy. This thesis approaches splicing detection from a slightly different perspective by choosing a modern splicing detection framework and examining a variety of preprocessing techniques along with their effect on classification accuracy. The preprocessing techniques explored include Joint Photographic Experts Group (JPEG) file-type block line blurring, image-level blurring, and image-level sharpening. Attention is also paid to preprocessing images adaptively based on the amount of high-frequency content they contain.

This thesis also addresses a known problem with a popular tampering evaluation dataset, where a mismatch in the number of JPEG processing iterations between the authentic and tampered sets creates an unfair statistical bias, leading to inflated detection rates. Many modern approaches do not acknowledge this issue, but this thesis applies a quality-factor equalization technique to reduce the bias. Additionally, it artificially inserts a mismatch in JPEG processing iterations by varying amounts to determine the effect on detection rates.
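A minimal sketch of quality-factor equalization: re-save both the authentic and tampered images at one common JPEG quality so that a classifier cannot exploit differing numbers of compression passes. The directory names and quality value are illustrative, and the thesis's exact equalization procedure may differ.

```python
import os
from PIL import Image

def equalize_quality(src_dir, dst_dir, quality=85):
    """Recompress every JPEG in src_dir at a fixed quality factor."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        with Image.open(os.path.join(src_dir, name)) as img:
            img.convert("RGB").save(os.path.join(dst_dir, name), "JPEG", quality=quality)

# Apply the same recompression to both halves of the dataset so that any
# remaining statistical differences come from the splicing itself.
equalize_quality("dataset/authentic", "equalized/authentic", quality=85)
equalize_quality("dataset/tampered", "equalized/tampered", quality=85)
```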
Contributors: Gubrud, Aaron (Author) / Li, Baoxin (Thesis advisor) / Candan, Kasim (Committee member) / Kadi, Zafer (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Online social networks are the hubs of social activity in cyberspace, and using them to exchange knowledge, experiences, and opinions is common. In this work, an advanced topic modeling framework is designed to analyse complex longitudinal health information from social media with minimal human annotation, and Adverse Drug Event and Reaction (ADR) information is extracted and automatically processed using a biased topic modeling method. This framework improves and extends existing topic modelling algorithms that incorporate background knowledge. Using this approach, background knowledge such as ADR terms and other biomedical knowledge can be incorporated during the text mining process, generating scores that indicate the presence of ADRs. A case-control study was performed on a dataset of Twitter timelines of women who announced their pregnancy; the goal of the study is to compare the ADR risk of medication usage from each medication category during pregnancy.
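A minimal sketch of the background-knowledge idea: score each post by how strongly it mentions terms from an ADR lexicon, so downstream topic modelling can be biased toward ADR-related content. The lexicon and posts are made up, and the thesis's biased topic model goes well beyond simple term counting.

```python
# Toy ADR-presence scoring against a background lexicon (illustrative only).
adr_lexicon = {"nausea", "headache", "rash", "dizziness", "insomnia"}

def adr_score(post, lexicon):
    """Fraction of tokens in the post that are known ADR terms."""
    tokens = post.lower().split()
    if not tokens:
        return 0.0
    return sum(token.strip(".,!?") in lexicon for token in tokens) / len(tokens)

# Hypothetical Twitter timeline.
timeline = [
    "Started the new medication last week",
    "Terrible nausea and headache all morning",
    "Feeling better today!",
]

for post in timeline:
    print(f"{adr_score(post, adr_lexicon):.2f}  {post}")
```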

In addition, to evaluate the prediction power of this approach, another important aspect of personalized medicine was addressed: the prediction of medication usage through the identification of risk groups. During the prediction process, the health information from Twitter timelines, such as diseases, symptoms, treatments, and effects, is summarized by the topic modelling process, and the summarization results are used for prediction. Dimension reduction and topic similarity measurement are integrated into this framework for timeline classification and prediction. This work could be applied to provide guidelines for FDA drug risk categories; currently, that process is based on laboratory results and reported cases.

Finally, a multi-dimensional text data warehouse (MTD) is proposed to manage the output of the topic modelling. Some attempts have also been made to incorporate the topic structure (ontology) into the MTD hierarchy. Results demonstrate that the proposed methods show promise and that this system represents a low-cost approach to early warning for drug safety.
Contributors: Yang, Jian (Author) / Gonzalez, Graciela (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Papotti, Paolo (Committee member) / Arizona State University (Publisher)
Created: 2017