Matching Items (5)
Description
Different logic-based knowledge representation formalisms have different limitations either with respect to expressivity or with respect to computational efficiency. First-order logic, which is the basis of Description Logics (DLs), is not suitable for defeasible reasoning due to its monotonic nature. The nonmonotonic formalisms that extend first-order logic, such as circumscription and default logic, are expressive but lack efficient implementations. The nonmonotonic formalisms based on the declarative logic programming approach, such as Answer Set Programming (ASP), have efficient implementations but are not expressive enough for representing and reasoning with open domains. This dissertation uses the first-order stable model semantics, which extends both first-order logic and ASP, to relate circumscription to ASP, and to integrate DLs and ASP, thereby partially overcoming the limitations of the formalisms. By exploiting the relationship between circumscription and ASP, well-known action formalisms, such as the situation calculus, the event calculus, and Temporal Action Logics, are reformulated in ASP. The advantages of these reformulations are shown with respect to the generality of the reasoning tasks that can be handled and with respect to computational efficiency. The integration of DLs and ASP presented in this dissertation provides a framework for integrating rules and ontologies for the semantic web. This framework enables us to perform nonmonotonic reasoning with DL knowledge bases. Observing the need to integrate action theories and ontologies, the above results are used to reformulate the problem of integrating action theories and ontologies as a problem of integrating rules and ontologies, thus enabling us to use the computational tools developed in the context of the latter for the former.
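The contrast drawn here between monotonic first-order reasoning and defeasible reasoning can be made concrete with a toy example. The sketch below is not the dissertation's stable-model machinery; it is a minimal, hypothetical Python illustration of a single default rule ("birds normally fly") whose conclusion is retracted when an exception is added, which is exactly the nonmonotonic behavior that plain first-order logic cannot capture.

```python
# Toy illustration of defeasible (nonmonotonic) reasoning.
# Hypothetical example; not the dissertation's formalism.

def flies(individual, facts):
    """Default rule: birds fly unless they are known to be abnormal (e.g. penguins)."""
    is_bird = ("bird", individual) in facts
    is_abnormal = ("penguin", individual) in facts
    return is_bird and not is_abnormal

facts = {("bird", "tweety")}
print(flies("tweety", facts))   # True: concluded by default

facts.add(("penguin", "tweety"))
print(flies("tweety", facts))   # False: new knowledge retracts the earlier conclusion
```

In a monotonic formalism, adding the fact that Tweety is a penguin could never invalidate a conclusion already drawn; under circumscription and the stable model semantics it can, which is what makes those formalisms suitable for defeasible reasoning.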
Contributors: Palla, Ravi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Text classification is a rapidly evolving area of data mining, while requirements engineering is a less-explored area of software engineering that deals with the process of defining, documenting, and maintaining a software system's requirements. When researchers began to blend these two streams, work emerged on automating the classification of software requirement statements into categories that developers can readily comprehend, enabling faster development and delivery; until now this has mostly been done manually by software engineers, which is a tedious job. However, most of this research focused on classifying non-functional requirements, those pertaining to intangible qualities such as security, reliability, and quality. Automatically classifying functional requirements, those pertaining to how the system will function, is a challenging task, especially when they belong to different and large enterprise systems, and it requires exploiting text mining capabilities. This thesis investigates the results of text classification applied to functional software requirements by creating a framework in R and employing algorithms such as k-nearest neighbors and support vector machines, along with boosting, bagging, maximum entropy, neural networks, and random forests in an ensemble approach. The study was conducted by collecting and visualizing relevant enterprise data that had previously been classified manually and was subsequently used to train the model. Key factors in training included the frequency of terms in the documents and the cleanliness of the data. The model was applied to test data and validated by studying and comparing metrics such as precision, recall, and accuracy.
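The thesis framework itself was built in R; the following is only a minimal Python sketch (using scikit-learn, with invented requirement statements and category labels) of the general approach the abstract describes: vectorize requirement text by term frequency and combine several classifiers in a voting ensemble, then report precision, recall, and accuracy.

```python
# Minimal sketch of ensemble classification of functional requirements.
# The thesis used an R framework; this Python version with made-up data is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Hypothetical requirement statements and categories.
requirements = [
    "The system shall allow the user to reset a forgotten password",
    "The system shall generate a monthly sales report in PDF format",
    "The system shall send an email notification when an order ships",
    "The system shall export customer records to a CSV file",
]
labels = ["account", "reporting", "notification", "reporting"]

# Term-frequency features feeding a hard-voting ensemble of three classifiers.
model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=1)),
            ("svm", SVC(kernel="linear")),
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ],
        voting="hard",
    ),
)
model.fit(requirements, labels)

# In practice a held-out test set would be used; here the training data is re-scored for brevity.
predictions = model.predict(requirements)
print(classification_report(labels, predictions))
```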
Contributors: Swadia, Japa (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Srividya (Committee member) / Gaffar, Ashraf (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Despite incremental improvements over decades, academic planning solutions see relatively little use in many industrial domains, even though planning paradigms are relevant to those problems. This work observes four shortfalls of existing academic solutions that contribute to this lack of adoption.

To address these shortfalls, this work defines model-independent semantics for planning and introduces an extensible planning library. This library is shown to produce feasible results on an existing benchmark domain, to overcome the usual modeling limitations of traditional planners, and to accommodate domain-dependent knowledge about the problem structure within the planning process.
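The abstract does not give the library's actual interface; the snippet below is only a hypothetical Python sketch of the kind of model-independent planning core it alludes to, in which states are ordinary values, an action is anything with an applicability test and a transition function, and a breadth-first search returns a feasible plan.

```python
# Hypothetical sketch of a model-independent forward-search planner.
# Not the thesis library's API; states are hashable values, actions are (name, applicable, apply) triples.
from collections import deque

def plan(initial_state, actions, is_goal):
    """Breadth-first search for a sequence of action names reaching a goal state."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, steps = frontier.popleft()
        if is_goal(state):
            return steps
        for name, applicable, apply_action in actions:
            if applicable(state):
                successor = apply_action(state)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, steps + [name]))
    return None  # no plan found

# Toy domain: move a package from A to B by loading, driving, and unloading a truck.
# State: (package_location, truck_location); "T" means the package is on the truck.
actions = [
    ("load",   lambda s: s[0] == s[1] and s[0] != "T", lambda s: ("T", s[1])),
    ("drive",  lambda s: True,                         lambda s: (s[0], "B")),
    ("unload", lambda s: s[0] == "T",                  lambda s: (s[1], s[1])),
]
print(plan(("A", "A"), actions, lambda s: s[0] == "B"))  # ['load', 'drive', 'unload']
```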
Contributors: Jonas, Michael (Author) / Gaffar, Ashraf (Thesis advisor) / Fainekos, Georgios (Committee member) / Doupe, Adam (Committee member) / Herley, Cormac (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
When software design teams attempt to collaborate on different design documents, they suffer from a serious collaboration problem. Designers collaborate either in person or remotely. In-person collaboration is expensive but effective; remote collaboration is inexpensive but inefficient. To gain the most benefit from collaboration, remote collaboration needs to be not only cheap but also as efficient as in-person collaboration.

Remote collaboration on software design relies on general-purpose tools such as Word and Excel. These tools are then shared inefficiently, using email, cloud-based file-locking tools, or something like Google Docs. Because these tools either increase the number of design building blocks or limit the times at which one can work on a specific document, they drastically decrease productivity.

This thesis outlines a new methodology to increase design productivity, accomplished by providing design-specific collaboration. Using version control systems, this methodology allows for effective project collaboration between remotely located design teams. The methodology encompasses role management, policy management, and design artifact management, including nonfunctional requirements. Version control can be applied to different design products, improving communication and productivity among design teams. This thesis outlines the methodology and then presents a proof-of-concept tool that embodies the core of these principles.
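The thesis does not specify its tool's data model; as a purely hypothetical sketch, the snippet below shows one way the role and policy management described above could gate changes to versioned design artifacts: each artifact type is mapped to the roles allowed to modify it, and a change is accepted only if the author's role is permitted.

```python
# Hypothetical sketch of role/policy management for versioned design artifacts.
# Names, roles, and artifact types are invented; the thesis tool's actual model may differ.
ARTIFACT_POLICY = {
    "use_case": {"analyst", "architect"},
    "class_diagram": {"architect", "developer"},
    "nonfunctional_requirement": {"analyst", "architect", "qa"},
}

TEAM_ROLES = {"alice": "architect", "bob": "developer", "carol": "qa"}

def may_commit(author, artifact_type):
    """Policy check run before a change to a design artifact is committed."""
    allowed_roles = ARTIFACT_POLICY.get(artifact_type, set())
    return TEAM_ROLES.get(author) in allowed_roles

print(may_commit("alice", "use_case"))  # True: architects may edit use cases
print(may_commit("bob", "use_case"))    # False: developers may not
```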
Contributors: Pike, Shawn (Author) / Gaffar, Ashraf (Thesis advisor) / Lindquist, Timothy (Committee member) / Whitehouse, Richard (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A well-defined software complexity theory that captures the cognitive aspects of comprehending algorithmic information is needed in the domain of cognitive informatics and computing. The existing complexity heuristics are vague and empirical. Industrial software is, in essence, a combination of implemented algorithms, but it would be wrong to conclude that algorithmic space and time complexity is software complexity. An algorithm with more lines of pseudocode can sometimes be simpler to understand than one with fewer lines, so it is crucial to determine the understandability of an algorithm in order to better understand software complexity. This work deals with understanding software complexity from a cognitive angle; it is also vital to assess the effect of reducing cognitive complexity. The work aims to establish three statements. First, while algorithmic complexity is a part of software complexity, software complexity does not solely and entirely mean algorithmic complexity. Second, the work intends to bring to light the importance of the cognitive understandability of algorithms. Third, it examines the impact that reducing cognitive complexity would have on software design and development.
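The claim that a longer algorithm can be easier to understand than a shorter one is easy to illustrate. The two hypothetical Python functions below (invented for this listing, not drawn from the thesis) compute the same result, yet the terser one arguably demands more cognitive effort from the reader, so line count is a poor proxy for cognitive complexity.

```python
# Two equivalent implementations of "sum of squares of the even numbers in a list".
# The shorter one is not the easier one to comprehend.
from functools import reduce

def sum_even_squares_clear(numbers):
    """Longer, but each step is explicit."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            square = n * n
            total += square
    return total

def sum_even_squares_terse(numbers):
    """Shorter, but the arithmetic trick hides the filtering logic."""
    return reduce(lambda acc, n: acc + (n * n) * (1 - n % 2), numbers, 0)

data = [1, 2, 3, 4, 5, 6]
print(sum_even_squares_clear(data), sum_even_squares_terse(data))  # 56 56
```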
Contributors: Mannava, Manasa Priyamvada (Author) / Ghazarian, Arbi (Thesis advisor) / Gaffar, Ashraf (Committee member) / Bansal, Ajay (Committee member) / Arizona State University (Publisher)
Created: 2016