Matching Items (68)
Description
The processing of large volumes of RDF data requires an efficient storage and query processing engine that can scale well with the volume of data. The initial attempts to address this issue focused on optimizing native RDF stores as well as conventional relational database management systems. But as the volume of RDF data grew exponentially, the limitations of these systems became apparent and researchers began to focus on using big data analysis tools, most notably Hadoop, to process RDF data. Various studies and benchmarks that evaluate these tools for RDF data processing have been published. In the past two and a half years, however, heavy users of big data systems, like Facebook, noted limitations with the query performance of these big data systems and began to develop new distributed query engines for big data that do not rely on map-reduce. Facebook's Presto is one such example.

This thesis evaluates the performance of Presto in processing big RDF data against Apache Hive. A comparative analysis was also conducted against 4store, a native RDF store. To evaluate the performance of Presto for big RDF data processing, a map-reduce program and a compiler, based on Flex and Bison, were implemented. The map-reduce program loads RDF data into HDFS while the compiler translates SPARQL queries into a subset of SQL that Presto (and Hive) can understand. The evaluation was done on four- and eight-node Linux clusters installed on the Microsoft Windows Azure platform with RDF datasets of 10, 20, and 30 million triples. The results of the experiments show that Presto has much higher performance than Hive and can be used to process big RDF data. The thesis also proposes an architecture based on Presto, Presto-RDF, that can be used to process big RDF data.
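As a hedged illustration of the kind of rewrite such a compiler performs (the triples-table layout, function name, and example pattern below are assumptions for the sketch, not the thesis's Flex/Bison implementation), a SPARQL basic graph pattern can be compiled into a self-join over a single triples table:

```python
# Sketch: translate a SPARQL basic graph pattern into SQL over a
# triples(subject, predicate, object) table. Illustrative only; the
# thesis compiler is implemented with Flex and Bison.

def bgp_to_sql(patterns):
    """patterns: list of (s, p, o) strings; a leading '?' marks a variable."""
    selects, froms, wheres = [], [], []
    bound = {}  # variable name -> first column that binds it
    for i, (s, p, o) in enumerate(patterns):
        alias = f"t{i}"
        froms.append(f"triples {alias}")
        for col, term in (("subject", s), ("predicate", p), ("object", o)):
            ref = f"{alias}.{col}"
            if term.startswith("?"):          # variable: join or project
                if term in bound:
                    wheres.append(f"{ref} = {bound[term]}")
                else:
                    bound[term] = ref
                    selects.append(f"{ref} AS {term[1:]}")
            else:                              # constant: filter
                wheres.append(f"{ref} = '{term}'")
    sql = f"SELECT {', '.join(selects)} FROM {', '.join(froms)}"
    if wheres:
        sql += " WHERE " + " AND ".join(wheres)
    return sql

# ?person knows ?friend, and ?friend lives in Phoenix
print(bgp_to_sql([("?person", "knows", "?friend"),
                  ("?friend", "livesIn", "Phoenix")]))
```

Each shared variable becomes a join condition between table aliases, which is exactly the shape of query that engines like Presto and Hive can execute.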
ContributorsMammo, Mulugeta (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created2014
Description
Assemblers and compilers provide feedback to a programmer in the form of error messages. These error messages become input to the debugging model of the programmer. To fix an error, the programmer should first locate the error in the program, understand what is causing it, and finally resolve it. Error messages play an important role in all three stages of fixing errors. This thesis studies the effects of error messages in the context of teaching programming. Given an error message, this work investigates how it affects the student's way of 1) understanding the error, and 2) fixing the error. As part of the study, three error message types were developed (Default, Link, and Example) to better understand the effects of error messages. The Default type provides an assembler-centric single-line error message, the Link type provides a program-centric detailed error description with a hyperlink for more information, and the Example type provides a program-centric detailed error description with a relevant example. All these error message types were developed for assembly language programming. A think-aloud programming exercise was conducted as part of the study to capture the student programmer's knowledge model. Different codes were developed to analyze the data collected as part of the think-aloud exercise. After transcribing, coding, and analyzing the data, it was found that the Link type of error message helped fix the error in less time and with fewer steps. Among the three types, the Link type of error message also resulted in a significantly higher ratio of correct to incorrect steps taken by the programmer to fix the error.
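To illustrate how the three presentations can differ for one underlying error (the message text, opcode, and manual URL below are invented for the sketch, not the study's actual materials):

```python
# Sketch of the three message styles compared in the study. The error
# text, opcode, and manual URL are hypothetical placeholders.

def default_msg(line, detail):
    # Assembler-centric single line.
    return f"line {line}: error: {detail}"

def link_msg(line, detail, explanation, url):
    # Program-centric description plus a pointer to the manual.
    return (f"line {line}: error: {detail}\n"
            f"  {explanation}\n"
            f"  See: {url}")

def example_msg(line, detail, explanation, example):
    # Program-centric description plus a corrected example.
    return (f"line {line}: error: {detail}\n"
            f"  {explanation}\n"
            f"  Example fix:\n    {example}")

detail = "unknown opcode 'addd'"
explanation = "The mnemonic is not in the instruction set; check the spelling."
print(default_msg(7, detail))
print(link_msg(7, detail, explanation, "https://example.edu/manual#opcodes"))
print(example_msg(7, detail, explanation, "addd $t0, $t1 -> add $t0, $t1"))
```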
ContributorsBeejady Murthy Kadekar, Harsha Kadekar (Author) / Sohoni, Sohum (Thesis advisor) / Craig, Scotty D. (Committee member) / Jordan, Shawn S (Committee member) / Gary, Kevin A (Committee member) / Arizona State University (Publisher)
Created2017
Description
Each programming language has a compiler associated with it which helps to identify logical or syntactical errors in the program. These compiler error messages play an important part as formative feedback for the programmer. Thus, the error messages should be constructed carefully, considering the affective and cognitive needs of programmers. This is especially true for systems that are used in educational settings, as the messages are typically seen by students who are novice programmers. If the error messages are hard to understand, they might discourage students from understanding or learning the programming language. The primary goal of this research is to identify methods to make the error messages more effective so that students can understand them better and simultaneously learn from their mistakes. This study is focused on understanding how the error message affects the understanding of the error and the approach students take to solve it. In this study, three types of error messages were provided to the students. The first, the Default type, is an assembler-centric error message. The second, the Link type, is a descriptive error message along with a link to the appropriate section of the PLP manual. The third, the Example type, is again a descriptive error message with an example of a similar error along with a correction step. All these error types were developed for the PLP assembly language. A think-aloud experiment was designed and conducted with the students. The experiment was later transcribed and coded to understand the different approaches students take to resolve each type of error message. Analysis of the think-aloud experiment showed that students read the Link type error message completely, and that they understood and learned from the error message to solve the error. The results also indicated that the Link type was more helpful compared to the other types of error message and made the error-solving process more effective.
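Coded sessions of this kind are often summarized per message type; the sketch below shows one plausible tally (the session records, codes, and numbers are hypothetical placeholders, not the study's data):

```python
# Sketch: summarizing coded think-aloud sessions per message type.
# Session records and values are hypothetical placeholders.
from collections import defaultdict

sessions = [
    # (message_type, seconds_to_fix, correct_steps, incorrect_steps)
    ("Default", 310, 4, 6),
    ("Link",    150, 5, 1),
    ("Example", 200, 5, 3),
    ("Link",    170, 6, 2),
]

by_type = defaultdict(list)
for mtype, secs, ok, bad in sessions:
    by_type[mtype].append((secs, ok, bad))

for mtype, rows in sorted(by_type.items()):
    mean_time = sum(r[0] for r in rows) / len(rows)
    ok = sum(r[1] for r in rows)
    bad = sum(r[2] for r in rows)
    ratio = ok / bad if bad else float("inf")
    print(f"{mtype:8s} mean time {mean_time:6.1f}s  correct:incorrect {ratio:.2f}")
```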
ContributorsTanpure, Siddhant Bapusaheb (Author) / Sohoni, Sohum (Thesis advisor) / Gary, Kevin A (Committee member) / Craig, Scotty D. (Committee member) / Arizona State University (Publisher)
Created2018
Description
Since the advent of the internet, and even more so with the rise of social media platforms, the explosive growth of textual data and its availability has made analysis a tedious task. Information extraction systems are available but are generally too specific, often extracting only the kinds of information they deem necessary and extraction-worthy. Using data visualization theory and fast, interactive querying methods, leaving out information might not really be necessary. This thesis explores textual data visualization techniques, intuitive querying, and a novel approach to all-purpose textual information extraction to encode a large text corpus and improve human understanding of the information present in textual data.

This thesis presents a modified traversal algorithm on the dependency parse output of text to extract all subject-predicate-object pairs from the text while ensuring that no information is missed. To support full-scale, all-purpose information extraction from large text corpora, a data preprocessing pipeline is recommended before the extraction is run. The output format is designed specifically to fit a node-edge-node model and form the building blocks of a network, which makes understanding the text and querying information from the corpus quick and intuitive. It attempts to reduce reading time and enhance understanding of the text using an interactive graph and timeline.
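A minimal sketch of subject-predicate-object extraction from a dependency parse, using spaCy for illustration (this naive traversal is an assumption-laden stand-in, not the modified algorithm the thesis presents):

```python
# Sketch: naive subject-predicate-object extraction from a dependency
# parse using spaCy. The thesis's modified traversal is more thorough;
# this only pairs nominal subjects and objects around each verb.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def spo_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subs = [c for c in token.lefts if c.dep_ in ("nsubj", "nsubjpass")]
                objs = [c for c in token.rights if c.dep_ in ("dobj", "attr")]
                for s in subs:
                    for o in objs:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(spo_triples("The committee approved the proposal. Alice wrote the report."))
# Each (subject, predicate, object) tuple maps onto a node-edge-node graph.
```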
ContributorsHashmi, Syed Usama (Author) / Bansal, Ajay (Thesis advisor) / Bansal, Srividya (Committee member) / Gonzalez Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created2018
Description
Academia is not what it used to be. In today's fast-paced world, requirements are constantly changing, and adapting to these changes in an academic curriculum can be challenging. Given a specific aspect of a domain, there are various levels of proficiency that students can achieve. Considering this wide array of needs, diverse groups need customized course curricula. The need for an archetype for designing a course focused on outcomes paved the way for Outcome-based Education (OBE). OBE focuses on the outcomes, as opposed to the traditional way of following a process [23]. According to D. Clark, Bloom's taxonomy was created to stimulate and inspire a higher quality of thinking in academia, incorporating not just basic fact-learning and application but also the evaluation and analysis of facts and their applications [7]. The Instructional Module Development System (IMODS) is the culmination of both these models, Bloom's Taxonomy and OBE. It is open-source, web-based software developed on the principles of OBE and Bloom's Taxonomy. It guides an instructor, step by step, through an outcomes-based process as they define the learning objectives and the content to be covered, and develop an instruction and assessment plan. The tool also provides the user with a repository of techniques based on the choices they make regarding the level of learning while defining the objectives. This helps maintain alignment among all the components of the course design. The tool also generates documentation to support the course design and provides feedback when the course is lacking in certain aspects.

It is not enough to come up with a model that theoretically facilitates effective result-oriented course design. There should be facts, experiments, and proof that the model succeeds in achieving what it aims to achieve. Thus, this thesis has two research objectives: (i) design a feature for course design feedback and evaluate its effectiveness; (ii) evaluate the usefulness of a tool like IMODS on various aspects: (a) the effectiveness of the tool in educating instructors on OBE; (b) the effectiveness of the tool in providing appropriate and efficient pedagogy and assessment techniques; (c) the effectiveness of the tool in building the learning objectives; (d) the effectiveness of the tool in document generation; (e) the usability of the tool; and (f) the effectiveness of OBE on course design and expected student outcomes. The thesis presents a detailed algorithm for course design feedback, its pseudocode, a description and proof of the correctness of the feature, the methods used for evaluation of the tool, the evaluation experiments, and an analysis of the obtained results.
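To make the feedback idea concrete, here is a hedged sketch of one possible alignment check (the Bloom levels, course data, and messages are illustrative assumptions, not the IMODS algorithm itself): each objective targets a learning level, and feedback is raised when no assessment covers the objective at that level.

```python
# Sketch: course-design feedback via objective/assessment alignment.
# The levels, course data, and messages are illustrative assumptions,
# not the actual IMODS feedback algorithm.

BLOOM = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

objectives = [
    {"id": "O1", "text": "Explain OBE principles", "level": "understand"},
    {"id": "O2", "text": "Design an assessment plan", "level": "create"},
]
assessments = [
    {"name": "Quiz 1", "covers": ["O1"], "level": "remember"},
]

def feedback(objectives, assessments):
    notes = []
    for obj in objectives:
        covering = [a for a in assessments if obj["id"] in a["covers"]]
        if not covering:
            notes.append(f'{obj["id"]}: no assessment covers this objective.')
            continue
        want = BLOOM.index(obj["level"])
        if all(BLOOM.index(a["level"]) < want for a in covering):
            notes.append(f'{obj["id"]}: assessments sit below the '
                         f'"{obj["level"]}" level the objective targets.')
    return notes or ["Course components are aligned."]

for note in feedback(objectives, assessments):
    print(note)
```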
ContributorsRaj, Vaishnavi (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created2018
Description
The Semantic Web contains large amounts of related information in the form of knowledge graphs such as DBpedia. These knowledge graphs are typically enormous and are not easily accessible for users, as they require specialized knowledge of query languages (such as SPARQL) as well as deep familiarity with the ontologies used by these knowledge graphs. So, to make these knowledge graphs more accessible (even for non-experts), several question answering (QA) systems have been developed over the last decade. Due to the complexity of the task, several approaches have been undertaken that include techniques from natural language processing (NLP), information retrieval (IR), machine learning (ML), and the Semantic Web (SW). At a higher level, most question answering systems approach the question answering task as a conversion from the natural language question to its corresponding SPARQL query. These systems then utilize the query to retrieve the desired entities or literals. One approach to solve this problem, used by most systems today, is to apply deep syntactic and semantic analysis on the input question to derive the SPARQL query. This has resulted in the evolution of natural language processing pipelines that have common characteristics such as answer type detection, segmentation, phrase matching, part-of-speech tagging, named entity recognition, named entity disambiguation, syntactic or dependency parsing, semantic role labeling, etc.

This has led to NLP pipeline architectures that integrate components, each solving a specific aspect of the problem and passing its results on to subsequent components for further processing, e.g., DBpedia Spotlight for named entity recognition and RelMatch for relational mapping. A major drawback of this approach is error propagation, a common problem in NLP: mistakes early in the pipeline can adversely affect successive steps further down the pipeline. Another approach is to use query templates, either manually generated or extracted from existing benchmark datasets such as Question Answering over Linked Data (QALD), to generate the SPARQL queries; a template is essentially a predefined query with various slots that need to be filled. This approach potentially shifts the question answering problem into a classification task where the system needs to match the input question to the appropriate template (class label).
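To make the template idea concrete, a hedged sketch (the template text and slot values are illustrative, not drawn from QALD or LC-QuAD): a template is a predefined SPARQL query whose slots classification selects and slot filling completes.

```python
# Sketch: a SPARQL query template with slots, as used in template-based
# QA. The template text and slot values are illustrative only.

TEMPLATE = "SELECT ?x WHERE {{ <{entity}> <{predicate}> ?x }}"

def fill(template, **slots):
    return template.format(**slots)

# "Who is the mayor of Paris?" -> classify to TEMPLATE, then fill slots.
query = fill(
    TEMPLATE,
    entity="http://dbpedia.org/resource/Paris",
    predicate="http://dbpedia.org/ontology/mayor",
)
print(query)
```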

This thesis proposes a neural network approach to automatically learn to classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering, which can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The dataset was created explicitly for machine-learning-based QA approaches for learning complex SPARQL queries. It consists of 5000 questions along with their corresponding SPARQL queries over the DBpedia dataset, spanning 5042 entities and 615 predicates. These queries were annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the Question Answering over Linked Data (QALD-7) dataset.
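For orientation, a deliberately simplified classifier sketch in PyTorch (a bag-of-embeddings stand-in with illustrative vocabulary and sizes; the thesis's actual model is a recursive neural network that composes vectors over the question's structure, which this does not implement):

```python
# Sketch: question-to-template classification. This bag-of-embeddings
# model is a simplified stand-in for the thesis's recursive network;
# the vocabulary size and dimensions are illustrative.
import torch
import torch.nn as nn

class TemplateClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, n_templates=38):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean of word vectors
        self.out = nn.Linear(embed_dim, n_templates)         # one score per template

    def forward(self, token_ids, offsets):
        return self.out(self.embed(token_ids, offsets))

model = TemplateClassifier(vocab_size=10_000)
tokens = torch.tensor([3, 17, 256, 9])   # one encoded question
offsets = torch.tensor([0])              # batch of size 1
logits = model(tokens, offsets)
top2 = logits.topk(2).indices            # top-2 templates, as in the evaluation
print(top2)
```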

The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates were considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset.

After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset.
ContributorsAthreya, Ram G (Author) / Bansal, Srividya (Thesis advisor) / Usbeck, Ricardo (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created2018
Description
In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies which form a dependency tree. An agent operating within these environments has access to low amounts of data about the environment before interacting with it, so it is crucial that this agent is able to effectively utilize a tree of dependencies and its environmental surroundings to make judgements about which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (with a standard deviation of 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and low amounts of pre-known information.
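A hedged sketch of the tabular Q-learning update underlying such an agent (the goal names, costs, and rewards are illustrative; the project replaces the table with a deep neural network as the function approximator):

```python
# Sketch: Q-learning over sub-goal choices in a crafting-style task.
# Goals, costs, and rewards are illustrative; the project substitutes
# a deep neural network for the table below.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)  # (state, sub_goal) -> expected return

def choose(state, sub_goals):
    if random.random() < EPSILON:                        # explore
        return random.choice(sub_goals)
    return max(sub_goals, key=lambda g: Q[(state, g)])   # exploit

def update(state, goal, reward, next_state, next_goals):
    best_next = max((Q[(next_state, g)] for g in next_goals), default=0.0)
    Q[(state, goal)] += ALPHA * (reward + GAMMA * best_next - Q[(state, goal)])

# One illustrative step: from "need_pickaxe", choosing "gather_wood"
# costs 1 (reward -1) and leads to "need_stone".
update("need_pickaxe", "gather_wood", -1.0, "need_stone", ["mine_stone"])
print(choose("need_pickaxe", ["gather_wood", "mine_stone"]))
print(Q[("need_pickaxe", "gather_wood")])
```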
ContributorsKoleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Young adults do not know basic emergency preparedness skills. Although there are materials out there, such as printed and online materials from the Center for Disease Control, it is unlikely that college-age people will take the time to read them. Some individuals have addressed the issue of young adults not wanting to read materials by creating a fun interactive game in the San Francisco area, but since the game must be played in person, a solution like that can only reach so far. Studies suggest that virtual worlds are effective in teaching people new skills, so I have created a virtual world that will teach people basic emergency preparedness skills in a way that is memorable and appealing to a college-age audience. The logic used to teach players the concepts of emergency preparedness is case-based reasoning. Case-based reasoning is the process of solving new problems by remembering similar solutions from the past. By experiencing a simulated emergency situation in a virtual world, young adults are more likely to know what to do in an actual emergency.
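A minimal sketch of the retrieval step at the heart of case-based reasoning (the case base, features, and similarity measure are invented for illustration): the system recalls the stored case most similar to the current situation and reuses its solution.

```python
# Sketch: nearest-case retrieval for case-based reasoning. The cases
# and features are hypothetical, not the thesis's actual case base.

cases = [
    ({"hazard": "fire", "indoors": True, "smoke": True},   "stay low, exit, meet outside"),
    ({"hazard": "quake", "indoors": True, "smoke": False}, "drop, cover, hold on"),
    ({"hazard": "flood", "indoors": False, "smoke": False}, "move to higher ground"),
]

def similarity(a, b):
    # Count matching features; richer metrics would weight them.
    return sum(1 for k in a if b.get(k) == a[k])

def retrieve(situation):
    return max(cases, key=lambda c: similarity(situation, c[0]))

new_situation = {"hazard": "fire", "indoors": True, "smoke": True}
features, solution = retrieve(new_situation)
print(solution)  # reuses the most similar past case's solution
```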
ContributorsTeplik, Julie Rachel (Author) / Craig, Scotty (Thesis director) / Amresh, Ashish (Committee member) / WPC Graduate Programs (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
This undergraduate thesis explores the efficacy of developing a translator generator in the Prolog programming language using Lexical Functional Grammars. A bidirectional machine translator between English and Hungarian, developed as a proof-of-concept case study, is discussed and assessed. The benefits and drawbacks of this approach as generalized to Machine Translation systems are also discussed, along with possible areas of future work.
ContributorsLane, Ryan Andrew (Author) / Bansal, Ajay (Thesis director) / Bansal, Srividya (Committee member) / Barrett, The Honors College (Contributor)
Created2015-05
Description
As robotics technology advances, robots are being created for use in situations where they collaborate with humans on complex tasks. For this to be safe and successful, it is important to understand what causes humans to trust robots more or less during a collaborative task. This research project aims to investigate human-robot trust through a collaborative game of logic that can be played with a human and a robot together. This thesis details the development of a game of logic that could be used for this purpose. The game of logic is based upon a popular game in AI research called 'Wumpus World'. The original Wumpus World game was a low-interactivity game to be played by humans alone. In this project, the Wumpus World game is modified for a high degree of interactivity with a human player, while also allowing the game to be played simultaneously by an AI algorithm.
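For readers unfamiliar with the domain, a minimal sketch of a classic Wumpus World grid and its percepts (the layout and coordinates are illustrative; the thesis's high-interactivity variant is not reproduced here):

```python
# Sketch: a tiny classic Wumpus World. An agent in a cell perceives a
# breeze next to a pit and a stench next to the wumpus. The layout
# below is illustrative only.

SIZE = 4
PITS = {(2, 0), (2, 2)}
WUMPUS = (0, 2)
GOLD = (1, 2)

def neighbors(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def percepts(cell):
    near = neighbors(*cell)
    return {
        "breeze": any(n in PITS for n in near),
        "stench": WUMPUS in near,
        "glitter": cell == GOLD,
    }

# This cell borders a pit and the wumpus and holds the gold.
print(percepts((1, 2)))
```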
ContributorsBoateng, Andrew Owusu (Author) / Sodemann, Angela (Thesis director) / Martin, Thomas (Committee member) / Software Engineering (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05